Aubergine Society

2022.04.16 S02E19

In our discussion of the confused stances we focused on Mission and Materialism because those had the most relevance to our personal experiences, moving frequently between higher purpose and mundane goals. Unlike the others present, I was never a teenage communist, but I will confess to being a teenage Satanist for much the same reasons: it was rebellious and fun at the time.

The Missions that we were most familiar with are AI risk and EA. @Evan_McMullen noted that these communities seemed to be influenced by Silicon Valley culture, becoming more amenable to meditation and psychedelics in recent years…

We got on the tangent of (epi)genetic memories which somehow led to sensory deprivation tanks and John C Lilly’s ECCO concept.

You must know/assume/simulate our existence in ECCO

The aforementioned condition reminded us of Roko’s basilisk, but also the Christian God which we renamed Jesus’s basilisk.

Continuing on the topic of genetics I learned of the so-called identical ancestor point from this brilliant Numberphile video:

Surprisingly, the IAP for humans is only 5-15 thousand years ago, meaning that everyone alive then who has any surviving descendants today is an ancestor of everyone alive today (though not equally related).
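A quick back-of-the-envelope calculation shows why the IAP has to be so recent: naive ancestor counts double every generation and soon dwarf any plausible ancient population, forcing family trees to merge (pedigree collapse). The generation length and population figure below are my own illustrative assumptions, not numbers from the video.

```python
# Naive ancestor count doubles every generation (2^n), but the real
# population is finite, so lineages must overlap (pedigree collapse).
# Assumed numbers for illustration: ~25-year generations and a
# hypothetical ancient population of ~5 million people.

GEN_YEARS = 25
POPULATION = 5_000_000  # assumed, purely for illustration

def naive_ancestors(years_ago: int) -> int:
    """Ancestor slots at a given depth if no lineages ever merged."""
    return 2 ** (years_ago // GEN_YEARS)

# Find how far back the naive count first exceeds the whole population.
crossover = next(y for y in range(0, 20_000, GEN_YEARS)
                 if naive_ancestors(y) > POPULATION)
print(crossover)  # 575 (23 generations: 2^23 ≈ 8.4M > 5M)
```

So well within a thousand years the naive tree already demands more distinct ancestors than the assumed population contains; run the doubling out over 5-15 thousand years and the overlap becomes total.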

@dglickman recommends

Synchronicities… On this day (also my birthday):

1943 – Albert Hofmann accidentally discovers the hallucinogenic effects of the research drug LSD.

1961 – In a nationally broadcast speech, Cuban leader Fidel Castro declares that he is a Marxist–Leninist and that Cuba is going to adopt Communism.

2022.04.23 S02E20

Just me and @dglickman holding down the fort for this session. We started with a discussion of emptiness prompted by this quote from the first article…

There is probably some sort of connection between nebulosity and emptiness. However, I think non-existence is mostly a red herring, and Nagarjuna’s four-fold logic has no obvious similarity with the method I present.

I had considered the connection previously, but that led me to contend that “emptiness” was likely a poor translation. I was thinking that the common senses of the English word, along the lines of empty space (the vacuum), the empty set (mathematics), or just the property of containers that contain nothing, don’t really capture the Buddhist sense. TIL thanks to Daniel that the Sanskrit word is Śūnyatā. He made a valiant attempt to explain, but alas, by the end I was feeling even more confused.

I was reminded of this classic:

I attempted to explain how I distinguish between the notions of hallucination and illusion. A hallucination is the perception of something that isn’t really there, while an illusion is a misinterpretation of a real perception (like an optical illusion or a rainbow). Daniel convinced me that the categories blur together in some cases.

I declared that I now identify as an “evo-rat”, short for “evolutionary rationalist”. The idea is that rationality is essentially an evolutionary process of variation and selection, conjecture and criticism (largely inspired by Chapman and Vervaeke and Deutsch). Bayesian (ortho) rationalists focus too much on the selection side of the process, which is of course necessary but not sufficient.

The idea came to me while bingeing on ToKCast:

I announced I had made a significant move closer to the post-rat cluster since eigenrobot followed me. We discussed who else is near the core of the post-rats and agreed QC might be the poster-child.

Back to Chapman and the 2nd article, we agreed that Meaningness ethics seems to rule out Deontology:

Similar situations often seem to have dissimilar ethical implications; right action seems to have unlimited dependence on the context.

We also agreed that Utilitarians are a kind of Consequentialist, and that we both were (mostly) Consequentialists and not Utilitarians. (Surprising amount of agreement today)

On the topic of current taboos we discussed [redacted] and [redacted] and IQ. On the latter, I noted that social justice activists never talk about how the left half of the bell curve contains the most marginalized people in our society. In my career of working with over 1000 people at a dozen different high-tech companies, I wouldn’t be surprised if almost all had above-average IQ.

Daniel recommends:

2022.04.30 S02E21

Before we even started on today’s material we got into an interesting tangent on the origin of war coinciding with the rise of agriculture and the State.

TIL metametaphysics is a thing and the Bible claims that locusts have 4 legs (maybe an honest mistake or maybe something like a “calling a deer a horse” test? :smiley: ).

I started off the discussion by asking the question: Is there any difference in behavior between someone that shifts between stances and someone who adopts the complete stance? I suspected that there was, but wanted to explore the distinctions. Skipping ahead, we found some answers in the schematic overview, a theme running down the 3rd column (complete stance) was play, humor, and light-hearted engagement. The confused stances take themselves too seriously.

I mentioned a Chapman quote I found buried in a reply to a comment recently:

Meaningness is disguised secularized Vajrayana

@Evan_McMullen thought this was obvious, being familiar with Vajrayana

All of us tended to agree with the complete stance column already, but I confessed I wavered a bit on the romantic rebellion stance in the Social Authority table, as I still identify as anti-authoritarian. I do value social order, I just don’t think placing authority in a government is the best way to achieve that. Daniel and Evan sympathized to some extent, but advised me to focus and reject the “romantic” aspect of the rebellion, which was a fair point.

On a similar note, Evan wavered a bit on the Ethical Eternalism stance, wondering whether it might be beneficial to believe that ethics has a foundation even if it doesn’t. That led to an interesting discussion of the US Constitution (and when it all went wrong), which Evan says he worships as a dead god, quoting Lovecraft:

That is not dead which can eternal lie, / And with strange aeons even death may die.
That Is Not Dead - Wikipedia

Some other topics that came up:

My subversion of the latest philosophy meme:

Miss Information (h/t Evan for that one)

2022.05.07 S02E22

Just me and @dglickman holding down the fort this week. We kicked it off with true confession time, the example of the extramarital affair in the first article hit way too close to home :grimacing: . It was 22 years ago, not 2, and has a happier ending (my 2nd marriage and 15-year-old son), but I definitely related to the philosophical turmoil described by Chapman.

Daniel asked if the Meaningness book would have helped at the time. I suggested that it might have, in the sense that it would be good to know that the confused stances are unstable, so it may be possible to modify one’s values so that you don’t desire something that is impossible to attain. Daniel raised the risk of deliberately changing one’s values, which was well taken. I think sometimes you do want to alter your desires, mentioning addiction as an example. You desire the addictive substance or whatever on one level, but wish you didn’t on another level. I compared it to applying CEV at the level of the individual.

I made the claim that all humans are necessarily incoherent internally due to our biological and physical limitations, but the main difference I look for in others is whether they interpret incoherence as a problem. I admitted that that is just another value (valuing coherence), but Daniel convinced me that it was a special value insofar as it applies to other values.

We moved on to discuss Chapman’s example of extreme meaninglessness:

A tiny gray pebble slides half an inch down a slope on a lifeless planet a million light-years from the nearest star. No being ever knows about this, and nothing happens as a result of it.

I took this opportunity to pitch my theory that Chapman was missing something fundamental about meaning: it could have a technical definition rooted in information theory. An event has meaning to the extent that it has downstream effects that someone cares about. For example, all events outside our lightcone cannot possibly be meaningful to us. And there could exist events that have a tremendous amount of meaning to someone, but they might be unaware of it until possibly later (or maybe not at all). Daniel demurred, saying that the physical information part was superfluous; all that mattered was that someone cared about the event. I’ll have to give this objection more thought, as the theory is still pretty nebulous.

2022.05.14 S02E23

The first question I asked today was how far you have to go back into the history of life on Earth to find the origin of meaning in the Chapman sense. Evan proposed early mammals (like 80 MYA), and I replied: if mammals, why not dinosaurs? (There is good evidence that some were behaviourally very similar to mammals in that they were social animals and raised their young.) We all agreed (Daniel too, I think) that amphibians seem to be relatively very stupid, so perhaps meaning arose some time between reptiles and dinosaurs.

We discussed possible relations between meaning and fun and play which led to the first deep dive on the relation between civilization and domestication.

Yes, that is Scully wearing a pussy hat. :smiling_face:

Apparently Michael Vassar has a theory about how domestication necessarily results in the attenuation of olfactory senses:

The next deep dive was on the origins of civilization itself and how we keep pushing the date back (a la Samo Burja) but that raises the question of how we define early civilizations.

I brought up Graeber’s posthumously published book as lending support to Evan’s claim that civilization tends to be good for the group but not for the individual. Daniel pushed back on my claim that North American indigenous tribes were relatively less domesticated, given that they were still neolithic when the Europeans arrived. I was thinking about Graeber’s claim that Europeans who were integrated into indigenous tribes tended to want to stay, but not vice versa.

After discussing the difference between the Mandate of Heaven and the Divine Right of Kings (the former is conditional), I got into more trouble by suggesting that peak USA was around 2000. I was thinking it has been declining ever since the endless Wars on Terror started. This led to an interesting discussion on how you would measure something like that and I proposed the best overall measure (as long as it isn’t gamed) is life expectancy. Even then, I suggested that trying to game that metric would likely backfire and reduce life expectancy.

On the distinction between weak and strong emergence I referenced a recent podcast about consciousness. Turns out Daniel watched the same one, and we all agreed that Philip Goff is the worst. Not really, but we are not at all impressed by his arguments.

Right on cue

2022.05.21 S02E24

Though we didn’t really have much experience with casinos we could relate to the (temporary) appeal of eternalism somewhat through psychedelic experiences. We discussed nihilist art, the relationship between skepticism and nihilism, and how current society might react to a modern-day Diogenes.

@Evan_McMullen suggested that Timothy Leary might be considered a modern-day Socrates, and I got to relate my close encounter with Leary at the Cyberarts conference circa '91. I was waiting in line to try out Jaron Lanier’s demo VR system and an elderly gentleman was inserted in line ahead of me by the conference organizers. I was a bit annoyed until I recognized Leary, who went on to become a great advocate of VR over the next few years.

The deep dive of the week was mostly about ketamine and its somewhat surprising approval by the FDA, its effect on luck, and neural annealing.

Another interesting tangent was prompted by this Chapman quote:

If meaningness was merely subjective, it would not be possible to be wrong about it.

Is that a good criterion for “subjective”, something that you can’t be wrong about? Can you be wrong about qualia? Like, those checker squares definitely look like different shades of grey to me even though I know intellectually that they are the same color…

My bet is that Len Sassaman was Satoshi

Coincidentally right after the meeting I was catching up on the #physics club material for the week and Sean Carroll was talking about self as a process in the context of identity across time:

2022.05.28 S02E25

All the pages under

We started with @Evan_McMullen noting that the concept of “invasive species” is a kind of eternalism. After all, species invade new territory all the time, and there is no right answer to which species deserves a particular territory.

I confessed that I failed the test to draw a bicycle from memory, even though I was familiar with the test and spent many years riding bicycles. Evan says he passed because he has spent a lot of time repairing bikes. I mentioned how impressed I was that there have been significant advances in manual can opener technology in the last few years that required no redesign of cans. Did anyone see that coming? I wonder if there are other cases like that just waiting for new inventions. Now that I think about it, there are indeed better mousetraps.

We agreed that the Illusion of Understanding was a stellar article. I observed how it was something like the illusion of detail and color at the periphery of our visual field.

The deep dive of the day was about the latest “current thing”, i.e. school shootings and gun control. I mentioned this in the context of how people tend to become less confident in their favored political policies when asked to explain how they would work. Like clockwork, the media is filled with calls for more gun control, but no one can explain what to realistically do with the several hundred million guns already in the US, let alone how that would prevent the school shootings. This latest one in Uvalde, TX was particularly bad because if anything it illustrates how little the police can be depended on to help. Maybe the age of majority could be raised to 21, but that has a lot of other consequences.

Somehow this led to a long tangent on communism, planned economies, AI, and a rant about headphone jacks for phones (blaming Jony Ive) and software bloat. :rofl:

Also TIL

and that humans are more closely related to mice (MRCA lived 87 MYA) than we are to most other mammals (94 MYA according to TimeTree)

2022.06.04 S02E26

First time in a long time we’ve had a full house with @Evan_McMullen, @dglickman, @Valeria, and @Sahil. We started by revisiting a topic from last week, the so-called Heinz dilemma:

A woman was near death from a special kind of cancer. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman’s husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000 which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: “No, I discovered the drug and I’m going to make money from it.” So Heinz got desperate and broke into the man’s laboratory to steal the drug for his wife. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?

It seems that none of us would fault Heinz for stealing the drug, but we differed on what would be an appropriate response. I suggested that Heinz is merely in debt to the druggist for $2000 plus damages for the break-in. Maybe he could raise the money after the fact, or work it off or something. Evan suggested that Heinz might negotiate a deal where he didn’t go to the news outlets with the story of how the druggist was being a dick. Sahil objected to calling the druggist a dick on the grounds that society should not expect everyone to be in a position of taking on counterparty risk (for example).

We moved on to this week’s readings with a discussion of the harm of eternalism (“show us on this doll where eternalism touched you”). Valeria surprised us (or at least me) by declaring that a cosmic plan exists; it just seems otherwise at times because we can’t understand it. I wasn’t sure if she really believed this or was just playing devil’s advocate, and she remained coy. On a related note, Daniel observed that there is a difference between being able to conceive of something and that same thing being possible, e.g. p-zombies.

We pivoted to discussing a question Sahil asked in Discord last week:

Here’s a fun question: if you had to write an epistemic status for meaningness posts, what would it look like?

Evan suggested that Meaningness taken as a whole doesn’t really have an epistemic status, rather it should be viewed as a design pattern.

We did an experiment for Valeria, who asked if we could parse this quote without rereading (I, for one, could not)…

The inquiry into religion attempted here proceeds by way of problems judged to lie hidden at the ground of the historical frontier we call “the modern world”.

It was from this book:

The deep dive this week was a discussion on the meaning of meaninglessness, and how some drugs can turn up the experience of deep meaning without affecting much else. Evan offered a quote from the Glass Bead Game that captures this feeling:

I suddenly realized that in the language, or at any rate in the spirit of the Glass Bead Game, everything actually was all-meaningful, that every symbol and combination of symbol led not hither and yon, not to single examples, experiments, and proofs, but into the center, the mystery and innermost heart of the world, into primal knowledge. Every transition from major to minor in a sonata, every transformation of a myth or a religious cult, every classical or artistic formulation was, I realized in that flashing moment, if seen with truly a meditative mind, nothing but a direct route into the interior of the cosmic mystery, where in the alternation between inhaling and exhaling…

We finished with a discussion of whether professional ethicists were more ethical than average (research says probably not):

2022.06.11 S02E27

@Evan_McMullen mentioned that he had recently met up with Matt Arnold who produces the audio version of

Rumor has it that Chapman may attend a meaningness meetup in Detroit this fall. Seems like an excellent opportunity for the Aubergine Society to convene in person.

We discussed our favourite eternalist ploys, including smearing (not to be confused with schmearing) and kitsch.

Riffing on what Chapman said:

I am unsure about my current list of ploys. They seem to overlap and run into each other somewhat, and I also expect I may find more of them. I may need to “refactor” the categories. Feedback about this would be welcome!

I proposed a new ploy that is kind of a combo of pretending and colluding, namely LARPing. While I conceded to Evan that most actual LARPers are self-aware, I contend that most Eternalists are self-aware on some level, at least they act as if they are.

I wondered if QAnoners were LARPing or actually insane. Evan said both and recommended a documentary:

After reading through all the ploys I noted two observations:

  1. Each ploy can be seen as increasing stupidity
  2. By the end I no longer saw the appeal of Eternalism. It would be bad if Eternalism of any kind was true.

To explain my latter claim, I tried to make an analogy with math and Gödel’s incompleteness theorem. I suggested it would be bad for math if it were actually as simple as deriving everything mechanically from a few axioms (at least from a meaning perspective).

I tried to bolster my claim that mathematics has a deeply embedded randomness against @dglickman 's objections by citing Chaitin’s Meta Math! book, but it has been too long since I read it to remember the arguments, alas.

We took a bit of detour into text compression as AI, and intelligence as compression. I got an opportunity to link my OEIS sequence and mention my first AI prof:
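The intelligence-as-compression idea has at least one concrete classic instantiation: the normalized compression distance (Cilibrasi and Vitányi), which uses an off-the-shelf compressor as a crude similarity detector. A minimal sketch with zlib; the sample strings are mine, purely for illustration:

```python
import zlib

# Normalized compression distance (NCD): texts that share structure
# compress better concatenated than separately, so NCD is lower for
# related texts. This is the "compression as intelligence" intuition
# in ~10 lines.

def clen(data: bytes) -> int:
    """Compressed length at zlib's maximum compression level."""
    return len(zlib.compress(data, 9))

def ncd(a: str, b: str) -> float:
    x, y = a.encode(), b.encode()
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

english = "the quick brown fox jumps over the lazy dog " * 20
similar = "the quick brown fox leaps over the lazy cat " * 20
random_ = "qzj xkv wpf mns bgt rdl hyc " * 30

# Related texts land closer together than unrelated ones:
print(ncd(english, similar) < ncd(english, random_))  # True
```

No model training involved: the compressor's dictionary-building does all the "understanding", which is what makes the analogy between compression and intelligence suggestive.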

2022.06.18 S02E28

The Aubergine Society welcomed newcomer John to the meeting. We started by discussing the apparent recent invasion of Nihilists in the comments. It looked like a reddit brigade but I was unable to find the source. Chapman was forced to close comments on the page, I suspect for the first time ever.

We revisited the meaning of “real”, prompted by this Chapman quote:

This is also wrong; nebulous meanings are “real,” for any reasonable definition of “real.”

I made a case for the David Deutsch view (likely inherited from Popper), that something is real if and only if it figures in your best explanation. We discussed some of the implications, like the reality of entities can change across time and people.

Valeria brought up this classic dialog from The Matrix:

Agent Smith : Why, Mr. Anderson? Why, why? Why do you do it? Why, why get up? Why keep fighting? Do you believe you’re fighting… for something? For more than your survival? Can you tell me what it is? Do you even know? Is it freedom? Or truth? Perhaps peace? Could it be for love? Illusions, Mr. Anderson. Vagaries of perception. Temporary constructs of a feeble human intellect trying desperately to justify an existence that is without meaning or purpose. And all of them as artificial as the Matrix itself, although… only a human mind could invent something as insipid as love. You must be able to see it, Mr. Anderson. You must know it by now. You can’t win. It’s pointless to keep fighting. Why, Mr. Anderson? Why? Why do you persist?
Neo : Because I choose to.
Agent Smith : Wait. I’ve seen this. I stand here, right here, and I’m supposed to say something. I say, “Everything that has a beginning has an end, Neo.”
Agent Smith : What? What did I just say?

The deep dive of the week was prompted by a recent AI story in the news: What is sentience and how can it be detected?

No conclusions, but in the end I was made to feel slightly bad for torturing a simulation of a thermostat in my mind. :face_with_spiral_eyes:

New from Vervaeke:

I recommend Jake Orthwein attempting to explain Chapman to a fellow critrat:

Daniel mentioned Karl Friston has a new book out:

TIL 2 new words: saudade (h/t Valeria) and Mitfreude (h/t John)

Obligatory nihilist scene from The Big Lewbowski…

This may be a good selection to discuss when we are finished with Meaningness. My copy is already on its way.

2022.06.25 S02E29

Much of the discussion today (with @Evan_McMullen, @dglickman, and John) revolved around the neurochemical basis of meaning and the potential for psychoactive mediation. If a drug like 5-MeO-DMT can enhance meaning (at least along one dimension; Evan was careful to tease apart significance and motivation in meaning), then are there other drugs that have the opposite effect, leading to nihilism? Almost certainly.

Friend of the Stoa, Andrés Gómez Emilsson is doing some very interesting related work at QRI:

Some of Andrés’s ideas about treating catatonia reminded me of the Robin Williams movie based on the Oliver Sacks book:

Though the new renaissance in psychedelics is somewhat encouraging, we revisited the topic of the danger of gurus, and the practice of medicalizing transformative experiences, mentioning Ram Dass as an example.

The discussion of black magic led to a brainstorming on what counts as modern-day magicians. I suggested software programmers spend their time figuring out arcane incantations in order to invoke real-world results. Evan and Daniel agreed modern fab units like ASML come close to magic.

Drawing upon D&D magic user specialties, we might consider movie makers to be master illusionists, and entrepreneurs to be conjurers.

2022.07.02 S02E30

Just me and @dglickman holding down the fort this week. I started by mentioning the other book club I’ve been attending the last couple months, Foresight Institute’s

I attempted to explain Robin Hanson’s grabby aliens model:

In one of his original posts on the topic Hanson says:

It looks like there is a non-trivial chance that we here on Earth will give birth to such an GC near here. And soon. (Say within a million years.)

We discussed the mindset of longtermism that would consider a million years “soon”, and its discontents, notably:

This led to a long tangent on the philosophy of discount rates. My theory is that discount rates encode uncertainty which necessarily increases with time. Daniel disagreed and had some good counterpoints. Drawing on a vague memory, I mentioned that studies showed that humans tend to implicitly use a hyperbolic discount rate whereas an exponential one would be ideal. Daniel asked a good question, who can say what is ideal here? After a couple false starts, I suggested that perhaps a simulation of an ecosystem containing populations of interacting hyperbolic and exponential discounters might show the latter win in the long run. We discussed whether rationalists actually do win generally, considering recent revelations from the BA rationalist community.
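That ecosystem simulation is beyond a book-club recap, but the core asymmetry is easy to sketch. Using the textbook forms (hyperbolic V = A/(1+kt), exponential V = Ae^(−rt)) with parameters I made up for illustration, hyperbolic discounters exhibit the famous preference reversal while exponential discounters stay time-consistent:

```python
import math

# Toy comparison of hyperbolic vs exponential discounting.
# All parameters (k, r, amounts, delays) are invented for illustration.

def hyperbolic(amount: float, delay: float, k: float = 1.0) -> float:
    return amount / (1 + k * delay)

def exponential(amount: float, delay: float, r: float = 0.05) -> float:
    return amount * math.exp(-r * delay)

def prefers_larger_later(value, common_delay: float) -> bool:
    """Prefer $110 after common_delay+5 time units over $100 after
    common_delay units?"""
    return value(110, common_delay + 5) > value(100, common_delay)

# Hyperbolic: the choice flips as both rewards recede into the future.
print(prefers_larger_later(hyperbolic, 0))    # False: grab $100 now
print(prefers_larger_later(hyperbolic, 60))   # True: wait for $110

# Exponential: the ratio of discounted values is independent of the
# common delay, so the choice never flips.
print(prefers_larger_later(exponential, 0))   # False
print(prefers_larger_later(exponential, 60))  # False
```

Time-inconsistency, not the curve shape per se, is what makes hyperbolic discounters exploitable by their future selves, which seems like one way to cash out Daniel’s question about what counts as “ideal” here.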

I spent a bit of time waxing nostalgic about my participation in nihilist-adjacent subcultures, various mixes of punk, goth, and industrial scenes in the 90s and 00s.

Based on Chapman’s reco we considered adding Camus’s The Rebel to the reading list:

Finally we practiced a bit of nihilizing with a discussion of recent American political events, from J6 to the overturning of Roe v Wade just last week. While I think SCOTUS was on firm legal grounds there, I confessed to being a pro-choice extremist, believing that the mother has absolute authority over the life of the pre-natal human for as long as they share a blood supply through the umbilical cord. Concerning J6, I didn’t mention it in the meeting but I consider an unarmed insurrection to be an oxymoron.

2022.07.09 S02E31

Quite a wide-ranging discussion thanks to @Evan_McMullen and @dglickman, occasionally touching on the topic of nihilism like a flat stone skipping across a still pond.

Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not all conflicts are merely misunderstandings.

We started with this quote from Sarah Constantin, offered as an example of something insightful from Michael Vassar, as requested by Wei Dai and prompted by Anna Salamon’s reply to Scott Alexander’s post (Rule Thinkers In, Not Out - LessWrong). Evan suggested the current mental health crisis could be viewed as demon possession using this sense of “evil”.

Apparently mathematicians that delve deeply into the Continuum Hypothesis tend to go insane, which reminded me of this classic textbook intro:

Since Nietzsche is Chapman’s favorite philosopher we discussed which would be the best book to include in this salon. Contenders:

We agreed Heidegger would probably be better if he had written in English, but perhaps even better if written in Latin, Old Norse, or Sanskrit. This led to a long tangent on the differences and merits of various human languages.


Good discussion of extremely ancient cities and Graeber’s last book:

Wokeism as a new religion and Boghossian’s Great Realignment theory:

The rabbit hole of the week was around the notion of Kyriarchy and its relation to Western Civ/Anglosphere/Game A as a self-terminating strategy…

Are chakras real? Evan had an interesting theory of chakras as kind of a communication mechanism between low-level subconscious processes and consciousness using the stable body map part of the mind.

Just to note an idea from Evan: start a trend on twitter where we list our top 10 Stoa sessions as a youtube playlist. TBD

2022.07.16 S02E32


Before @Evan_McMullen could join while on the road in PA, @dglickman and I were discussing the latest Making Sense podcast (general pessimism about the near future)

I mentioned Brett Hall (ToKCast) was coming out with a rebuttal from the general optimism perspective:

When Evan joined we quickly pivoted to Roko’s basilisk, and I was surprised to learn that the LW elites still take it quite seriously, despite the damage control that downplayed the matter when it was widely mocked by Rationalist detractors.

I noted that when I originally heard of the Basilisk, it immediately reminded me of the basic contours of the Judeo-Christian god (i.e. act as if you believe and obey me now to avoid infinite punishment later). We spent a bit of time trying to come up with reasons why the Christianity egregore might have been a net positive on civilization but eventually gave up.

Evan recommends an alternate history book:

We rounded out the discussion, tying it back to nihilism, talking about how EA seems designed to generate meaningfulness for secular rationalists (and how no one should be surprised that it turned into a bit of a cult), and whether suicide could be considered a perfected form of nihilism (fair arguments on both sides of that claim).

2022.07.23 S02E33


Thanks to @dglickman for joining the final session on the Nihilism section. We started by trying to answer a relatively easy metaphysical question: Why is there something rather than nothing? Though we disagreed on the response, we did agree to reject the standard options. For me, it is not an either/or question, but rather a both/and. We do indeed observe something in our timeline, but I suggest that our understanding of physics means that there are alternate Everettian timelines with nothing (either because of different physical laws or different initial conditions, some non-zero measure of alternate timelines have no matter, energy, space, or time, and are therefore indistinguishable from nothing).

From there, we transitioned to Occam’s razor as a method of model selection, and then did a deep dive into pop culture and TV tropes and modern myths (including Westworld, For All Mankind, Stranger Things, The Sopranos, Game of Thrones, Lord of the Rings, Hamilton, Star Wars, Star Trek, the MCU, and the Wizard of Oz).

Circling back to Nihilism, we discussed whether we are personally in a good position now to talk someone out of it, having read Chapman’s extensive and reasonably complete rebuttals. Neither of us was especially confident, but perhaps it will take some practice. Daniel surprised me with a hot take: maybe meaning itself has become too reified. Maybe it is a lot more shallow than it seems, like beliefs and propositions. Maybe worrying about it is unnecessary? I have to admit I’ve been so steeped in the material that it never occurred to me to consider this possibility.

Before we leave nihilism behind, I’m tempted to read this paper and its responses:

2022.07.30 S02E34


Thanks to @dglickman and Arizona for an engaging discussion as we pivoted from Nihilism to the Complete Stance. We started with a bit of a tangent, whether Chapman intentionally avoids whole categories of criticism by claiming not to be doing philosophy. This strategy has some famous antecedents, like the logical positivists discounting all of metaphysics as nonsense, or religious scholars claiming that logic has no bearing on their discourse. “You have no power here” :grinning:

Arizona decloaked when we started down the rabbit hole of the week: consciousness. I mentioned that I was disappointed (but not really surprised) that all the responses I saw to Nick Cammarata’s claim were in agreement with the sentiment that intelligence has nothing to do with consciousness:

In contrast, my current understanding is that consciousness is necessary to surpass a certain level of intelligence, perhaps mammalian level. I find friend-of-the-Stoa Frank Heile’s theory of consciousness quite compelling, which is itself predicated on the good regulator theorem from control theory, and attention schema theory:

Briefly, any control system that can control attention must have an internal model of attention, which is identical to consciousness. Simpler organisms like insects and worms are definitely aware of their environment, but they probably aren’t complex enough to have any sort of focus or attention, so there is no need to control attention. The ability to keep the sensory landscape relatively static and attend to different parts (like focusing on a sound or the periphery of the visual field) demonstrates control of attention. The corresponding attention model must necessarily contain the contents of attention, which is precisely what defines consciousness. We (everyone conscious) are a model of attention, in a model of the mind, in a model of self, in a model of the world, in the brain of an organism. So in that sense, it is true that the (conscious) self is an illusion because it is the map, not the territory.
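Here's a toy caricature of the argument (entirely my own sketch, not Heile's actual model; the class and its fields are invented for illustration): an agent that steers its attention does so via a simplified internal model of that attention, and introspective reports read off the model rather than the underlying mechanism.

```python
class Agent:
    """Toy agent: sensory channels, a steerable attention spotlight, and a
    simplified internal model of that spotlight (the 'attention schema')."""

    def __init__(self, channels):
        self.channels = channels        # channel name -> current content
        self.attended = None            # the actual attention mechanism
        self.attention_model = None     # the agent's model of its own attention

    def attend(self, channel):
        # Good-regulator flavor: to control attention, the agent updates its
        # model of attention and then acts on that model.
        self.attention_model = {"focus": channel}
        self.attended = channel

    def report(self):
        # Introspection reads the model, not the mechanism, which is the sense
        # in which the conscious self is the map rather than the territory.
        if self.attention_model is None:
            return "nothing in particular"
        return f"I am aware of {self.channels[self.attention_model['focus']]}"

a = Agent({"sound": "birdsong", "vision": "sunset"})
a.attend("vision")
print(a.report())  # -> I am aware of sunset
```

The point of the caricature: `report` never touches `attended` directly, so what the agent "knows about its experience" is exactly the contents of its attention model.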

Other related topics that came up:

2022.08.13 S02E35

We started with a discussion of some very exciting and related news: @Evan_McMullen and Matt Arnold announced they are organizing a gathering about metamodernity, sense-making, rationalism, metarationality, and thinking-about-thinking, for bloggers, their readers, and enthusiasts. It's scheduled for mid- to late 2023, so we have plenty of time to plan.

I found Jordan Hall’s response mildly amusing

It makes a bit more sense now that I've learned he had moved to Ecuador. We wondered whether he had landed in a startup city like Prospera, which let me mention my connection as one of the main developers of the Ulex open-source legal system (law is code), which of course triggered Gödel's Loophole:

Finally getting around to the main topic we agreed that the description of the completion textures did indeed make the complete stance sound very appealing. I can easily imagine aspiring to live in a mode of being characterized by those textures.

The six textures can be thought of as each leading to the next: wonder → curiosity → humor → play → enjoyment → creation. It is useful to understand this as a causal sequence. It is also useful to understand that it is not actually one.

Turns out we are all fans of a movie that captures the wonder of humanity and civilization:

In very tangentially related news:

2022.08.20 S02E36


Very enjoyable discussion with @dglickman that followed many tangents before getting to the complete stance, including the prospects of John Carmack's new AGI company, the Vasserite and Zizian spin-offs of the Bay Area rationalist community, and the recent controversy around Sam Harris.

I confessed I had some trouble understanding Chapman’s understanding of the concept of understanding. Chapman states:

An understanding is a way of being; nebulous but effective patterns of thinking, feeling, and interacting. An understanding is not a collection of statements that might be definitely true or false.

and in a footnote:

As a historical note, “understanding precedes representation” was a central slogan of Phil Agre’s and my work in artificial intelligence.

My own conception of understanding was that an agent can be said to understand something if it has access to an internal model sufficiently accurate to explain and/or predict that thing, which seems to require representation and so contradicts what Chapman is claiming.
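To make my conception concrete, a minimal operational sketch (my framing, not Chapman's; the functions are invented for illustration): an agent understands a process if its internal model's predictions track the process within tolerance.

```python
def understands(model, process, inputs, tol=1e-6):
    """An agent 'understands' a process if its internal model's predictions
    match the process on the inputs of interest, within tolerance."""
    return all(abs(model(x) - process(x)) <= tol for x in inputs)

def hidden_process(x):      # the thing to be understood
    return 3 * x + 2

def good_model(x):          # an accurate internal representation
    return 3 * x + 2

def bad_model(x):           # wrong structure: no understanding
    return x ** 2

inputs = range(-5, 6)
print(understands(good_model, hidden_process, inputs))  # True
print(understands(bad_model, hidden_process, inputs))   # False
```

Note that this framing bakes representation in from the start, which is exactly where it parts ways with "understanding precedes representation."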

Daniel suggested looking at it from a developmental perspective: an infant human might gain some understanding of its world through interaction and experimentation before it has a cognitive model of the world, hence understanding precedes representation.

Next I tried to imagine a physical model of “stance space” where an individual’s current stance moves between the complete stance and various confused stances. Chapman says that the confused stances are easy to attain but difficult to maintain while the complete stance is the opposite. Daniel helpfully suggested that the complete stance might be like a plateau with steep sides, difficult to climb to the summit, but easy to stay on the flat top once there. In this view the confused stances might be like low peaks or ridges, easy to climb to the top but also easy to fall off a side? Maybe? Needs more work.
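A toy version of Daniel's landscape (my own sketch of his metaphor; the functions and constants are arbitrary): model the complete stance as a flat-topped plateau with steep sides and a confused stance as a narrow peak, then compare the downhill pull a small step away from each summit.

```python
import math

def plateau(x, center=0.0, width=2.0, steep=5.0):
    # Complete stance: product of two sigmoids gives a flat top with steep sides.
    left = 1 / (1 + math.exp(-steep * (x - (center - width))))
    right = 1 / (1 + math.exp(steep * (x - (center + width))))
    return left * right

def narrow_peak(x, center=6.0, sharp=4.0):
    # Confused stance: a narrow Gaussian peak, easy to summit, easy to slide off.
    return math.exp(-sharp * (x - center) ** 2)

def slope(f, x, h=1e-4):
    # Magnitude of the downhill pull at x (central difference).
    return abs(f(x + h) - f(x - h)) / (2 * h)

# Drift half a step off each summit and compare the pull.
print("plateau:", slope(plateau, 0.5))          # tiny: still on the flat top
print("narrow peak:", slope(narrow_peak, 6.5))  # large: already sliding off
```

The asymmetry matches the intuition: on the plateau a small lapse costs almost nothing, while on the narrow peak the same lapse puts you well down the slope. It leaves out how hard each summit is to reach in the first place, so it still needs more work.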

We ended by discussing the possibility of a complete stance program, prompted by my (lame) joke about joining a support group if you find yourself falling off the wagon into the confused stances again and again. We resolved to continue the discussion next time, but for next week we'll work on noticing when we revert to confused stances and recalling what strategies returned us to a more complete stance.

2022.09.03 S02E37

all pages under:

Thanks to @dglickman for an enjoyable conversation about monism and dualism. We ended up spending most of our time discussing ontology: do boundaries exist only in the mind? What is a mind anyway? In what sense do electrons exist? What are objects really?

Turns out we’re both partial to the (David) Deutschian view that something exists if it is an essential component of the best explanation of the subject at hand. Daniel attributed this view to Carnap, while Deutsch attributes it to Popper. I wondered if Popper knew Carnap, and Daniel pointed out they were both part of the Vienna Circle. (I misheard this as “inner circle,” and coincidentally the Vienna Circle did indeed have an inner circle, which included Carnap but not Popper.)

Definitely worth reading Nerst’s piece referenced by Chapman:

I took exception to Chapman’s definition of object:

So, intuitively, an object is a bunch of bits that are connected together, and aren’t connected to other things. The boundary of the object is where the connections stop.

… and pitched Daniel on my definition of object (part of my minimal ontology):

An object is a set of related properties.

I claim that my definition is more general, covering not only physical objects but also mathematical and programmatic ones. Daniel was not convinced, though he said it “wasn’t terrible”. :laughing:
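A minimal sketch of my definition in code (mine, obviously, not Chapman's; the example objects are invented for illustration): treat an object as nothing more than a read-only set of named, related properties, and the same construction covers physical, mathematical, and programmatic objects uniformly.

```python
from types import MappingProxyType

def obj(**properties):
    """An object is just its (read-only) set of related properties."""
    return MappingProxyType(dict(properties))

rock = obj(mass_kg=2.3, volume_m3=0.001, location="riverbed")  # physical
triangle = obj(sides=3, angle_sum_deg=180, regular=False)      # mathematical
account = obj(id=42, balance=100.0, currency="USD")            # programmatic

# The "boundary" of an object is then the extent of its property set,
# not a spatial surface, so no notion of physical connectedness is needed.
print(sorted(triangle))    # the properties that constitute the object
print(account["balance"])  # 100.0
```

On this view Chapman's "connected bits" definition becomes the special case where the properties happen to be spatial.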

I mentioned that my copy of Active Inference arrived.

I’d like to add it to our reading lists along with Cantwell-Smith:

Daniel in his new location