Aubergine Society

2022.06.04 S02E26

First time in a long time we’ve had a full house with @Evan_McMullen, @dglickman, @Valeria, and @Sahil. We started by revisiting a topic from last week, the so-called Heinz dilemma:

A woman was near death from a special kind of cancer. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman’s husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000 which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: “No, I discovered the drug and I’m going to make money from it.” So Heinz got desperate and broke into the man’s laboratory to steal the drug for his wife. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?

It seems that none of us would fault Heinz for stealing the drug, but we differed on what would be an appropriate response. I suggested that Heinz is merely in debt to the druggist for $2,000 plus damages for the break-in. Maybe he could raise the money after the fact, or work it off or something. Evan suggested that Heinz might negotiate a deal where he didn’t go to the news outlets with the story of how the druggist was being a dick. Sahil objected to calling the druggist a dick, on the grounds that society should not expect everyone to be in a position to take on counterparty risk, for example.

We moved on to this week’s readings with a discussion of the harm of eternalism (“show us on this doll where eternalism touched you”). Valeria surprised us (or at least me) by declaring that a cosmic plan exists; it just seems otherwise at times because we can’t understand it. I wasn’t sure if she really believed this or was just playing devil’s advocate, and she remained coy. On a related note, Daniel observed that there is a difference between being able to conceive of something and that same thing being possible, e.g. p-zombies.

We pivoted to discussing a question Sahil asked in Discord last week:

Here’s a fun question: if you had to write an epistemic status for meaningness posts, what would it look like?

Evan suggested that Meaningness taken as a whole doesn’t really have an epistemic status, rather it should be viewed as a design pattern.

We did an experiment for Valeria, who asked if we could parse this quote without rereading (I, for one, could not)…

The inquiry into religion attempted here proceeds by way of problems judged to lie hidden at the ground of the historical frontier we call “the modern world”.

It was from this book:

The deep dive this week was a discussion on the meaning of meaninglessness, and how some drugs can turn up the experience of deep meaning without affecting much else. Evan offered a quote from the Glass Bead Game that captures this feeling:

I suddenly realized that in the language, or at any rate in the spirit of the Glass Bead Game, everything actually was all-meaningful, that every symbol and combination of symbol led not hither and yon, not to single examples, experiments, and proofs, but into the center, the mystery and innermost heart of the world, into primal knowledge. Every transition from major to minor in a sonata, every transformation of a myth or a religious cult, every classical or artistic formulation was, I realized in that flashing moment, if seen with truly a meditative mind, nothing but a direct route into the interior of the cosmic mystery, where in the alternation between inhaling and exhaling,

We finished with a discussion of whether professional ethicists were more ethical than average (research says probably not):

2022.06.11 S02E27

@Evan_McMullen mentioned that he had recently met up with Matt Arnold who produces the audio version of meaningness.com:

Rumor has it that Chapman may attend a meaningness meetup in Detroit this fall. Seems like an excellent opportunity for the Aubergine Society to convene in person.

We discussed our favourite eternalist ploys, including smearing (not to be confused with schmearing) and kitsch.

Riffing on what Chapman said:

I am unsure about my current list of ploys. They seem to overlap and run into each other somewhat, and I also expect I may find more of them. I may need to “refactor” the categories. Feedback about this would be welcome!

I proposed a new ploy that is kind of a combo of pretending and colluding, namely LARPing. While I conceded to Evan that most actual LARPers are self-aware, I contend that most Eternalists are also self-aware on some level, or at least act as if they are.

I wondered if QAnoners were LARPing or actually insane. Evan said both and recommended a documentary:

After reading through all the ploys I made two observations:

  1. Each ploy can be seen as increasing stupidity
  2. By the end I no longer saw the appeal of Eternalism. It would be bad if Eternalism of any kind were true.

To explain my latter claim, I tried to make an analogy with math and Gödel’s incompleteness theorem. I suggested it would be bad for math if it were actually as simple as deriving everything mechanically from a few axioms (at least from a meaning perspective).

I tried to bolster my claim that mathematics has a deeply embedded randomness against @dglickman’s objections by citing Chaitin’s Meta Math! book, but it has been too long since I read it to remember the arguments, alas.

We took a bit of detour into text compression as AI, and intelligence as compression. I got an opportunity to link my OEIS sequence and mention my first AI prof:
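As a toy illustration of the compression-as-intelligence idea, here is a sketch of normalized compression distance (Cilibrasi and Vitányi’s measure: things that compress well together are similar) using Python’s standard zlib; the sample strings are mine, purely illustrative:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 = very similar, near 1 = unrelated."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Repetition gives the compressor structure to exploit.
fox = b"the quick brown fox jumps over the lazy dog. " * 20
cat = b"the quick brown fox jumps over the lazy cat. " * 20
odd = b"colorless green ideas sleep furiously tonight. " * 20

# The near-duplicate text should score a smaller distance than the unrelated one.
print(round(ncd(fox, cat), 3), round(ncd(fox, odd), 3))
```

No learning and no semantics, yet a general-purpose compressor already yields a crude similarity judgment, which is the intuition behind the tangent.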

2022.06.18 S02E28

The Aubergine Society welcomed newcomer John to the meeting. We started by discussing the apparent recent invasion of Nihilists in the comments. It looked like a reddit brigade but I was unable to find the source. Chapman was forced to close comments on the page, I suspect for the first time ever.

We revisited the meaning of “real”, prompted by this Chapman quote:

This is also wrong; nebulous meanings are “real,” for any reasonable definition of “real.”

I made a case for the David Deutsch view (likely inherited from Popper), that something is real if and only if it figures in your best explanation. We discussed some of the implications, like the reality of entities can change across time and people.

Valeria brought up this classic dialog from The Matrix:

Agent Smith : Why, Mr. Anderson? Why, why? Why do you do it? Why, why get up? Why keep fighting? Do you believe you’re fighting… for something? For more than your survival? Can you tell me what it is? Do you even know? Is it freedom? Or truth? Perhaps peace? Could it be for love? Illusions, Mr. Anderson. Vagaries of perception. Temporary constructs of a feeble human intellect trying desperately to justify an existence that is without meaning or purpose. And all of them as artificial as the Matrix itself, although… only a human mind could invent something as insipid as love. You must be able to see it, Mr. Anderson. You must know it by now. You can’t win. It’s pointless to keep fighting. Why, Mr. Anderson? Why? Why do you persist?
Neo : Because I choose to.
Agent Smith : Wait. I’ve seen this. I stand here, right here, and I’m supposed to say something. I say, “Everything that has a beginning has an end, Neo.”
[pause]
Agent Smith : What? What did I just say?

The deep dive of the week was prompted by a recent AI story in the news: What is sentience and how can it be detected?

No conclusions, but in the end I was made to feel slightly bad for torturing a simulation of a thermostat in my mind. :face_with_spiral_eyes:

New from Vervaeke:

I recommend Jake Orthwein attempting to explain Chapman to a fellow critrat:

Daniel mentioned Karl Friston has a new book out:

TIL 2 new words: saudade (h/t Valeria) and Mitfreude (h/t John)

Obligatory nihilist scene from The Big Lebowski…

This may be a good selection to discuss when we are finished with Meaningness. My copy is already on its way.

2022.06.25 S02E29

Much of the discussion today (with @Evan_McMullen, @dglickman, and John) revolved around the neurochemical basis of meaning and the potential for psychoactive mediation. If a drug like 5-MeO-DMT can enhance meaning (at least along one dimension; Evan was careful to tease apart significance and motivation in meaning), then are there other drugs that have the opposite effect, leading to nihilism? Almost certainly.

Friend of the Stoa, Andrés Gómez Emilsson is doing some very interesting related work at QRI:

Some of Andrés’s ideas about treating catatonia reminded me of the Robin Williams movie based on the Oliver Sacks book:

Though the new renaissance in psychedelics is somewhat encouraging, we revisited the topic of the danger of gurus, and the practice of medicalizing transformative experiences, mentioning Ram Dass as an example.

The discussion of black magic led to a brainstorming on what counts as modern-day magicians. I suggested software programmers spend their time figuring out arcane incantations in order to invoke real-world results. Evan and Daniel agreed that modern fabs, with machines like ASML’s, come close to magic.

Drawing upon D&D magic user specialties, we might consider movie makers to be master illusionists, and entrepreneurs as conjurers.

2022.07.02 S02E30

Just me and @dglickman holding down the fort this week. I started by mentioning the other book club I’ve been attending the last couple months, Foresight Institute’s

I attempted to explain Robin Hanson’s grabby aliens model:

In one of his original posts on the topic Hanson says:

It looks like there is a non-trivial chance that we here on Earth will give birth to such an GC near here. And soon. (Say within a million years.)

We discussed the mindset of longtermism that would consider a million years “soon”, and its discontents, notably:

This led to a long tangent on the philosophy of discount rates. My theory is that discount rates encode uncertainty, which necessarily increases with time. Daniel disagreed and had some good counterpoints. Drawing on a vague memory, I mentioned that studies showed that humans tend to implicitly use a hyperbolic discount rate whereas an exponential one would be ideal. Daniel asked a good question: who can say what is ideal here? After a couple of false starts, I suggested that perhaps a simulation of an ecosystem containing populations of interacting hyperbolic and exponential discounters might show the latter win in the long run. We discussed whether rationalists actually do win generally, considering recent revelations from the BA rationalist community.
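The hyperbolic-versus-exponential point can be shown with a toy sketch (the amounts, delays, and rate parameters below are mine, chosen only for illustration). An exponential discounter’s preference between two dated payoffs never flips as both recede into the future; a hyperbolic discounter’s can, which is the usual sense in which hyperbolic discounting is called non-ideal (time-inconsistent):

```python
def v_exp(amount: float, delay_days: float, r: float = 0.01) -> float:
    """Exponential discounting: value falls by a constant factor per day."""
    return amount / (1.0 + r) ** delay_days

def v_hyp(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Hyperbolic discounting: steep near-term drop, long flat tail."""
    return amount / (1.0 + k * delay_days)

def prefers_later(value_fn, shift_days: float) -> bool:
    # Choice: $100 at `shift_days` vs $110 a week later than that.
    return value_fn(110, shift_days + 7) > value_fn(100, shift_days)

# Same choice, offered now and then pushed a year into the future.
print(prefers_later(v_exp, 0), prefers_later(v_exp, 365))
print(prefers_later(v_hyp, 0), prefers_later(v_hyp, 365))
```

With these toy numbers the exponential chooser picks the larger-later payoff at both horizons, while the hyperbolic chooser takes the smaller-sooner payoff now but flips to the larger-later one once the whole choice is a year out, i.e. the classic preference reversal.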

I spent a bit of time waxing nostalgic about my participation in nihilist-adjacent subcultures, various mixes of punk, goth, and industrial scenes in the 90s and 00s.

Based on Chapman’s reco we considered adding Camus’s The Rebel to the reading list:

Finally we practiced a bit of nihilizing with a discussion of recent American political events, from J6 to the overturning of Roe v Wade just last week. While I think SCOTUS was on firm legal grounds there, I confessed to being a pro-choice extremist, believing that the mother has absolute authority over the life of the pre-natal human for as long as they share a blood supply through the umbilical cord. Concerning J6, I didn’t mention it in the meeting but I consider an unarmed insurrection to be an oxymoron.

2022.07.09 S02E31

Quite a wide-ranging discussion thanks to @Evan_McMullen and @dglickman, occasionally touching on the topic of nihilism like a flat stone skipping across a still pond.

Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not all conflicts are merely misunderstandings.

We started with this quote from Sarah Constantin, providing an example of something insightful from Michael Vassar as requested by Wei Dai, prompted by Anna Salamon’s reply to Scott Alexander’s post (Rule Thinkers In, Not Out - LessWrong). Evan suggested the current mental health crisis could be viewed as demon possession using this sense of “evil”.

Apparently mathematicians that delve deeply into the Continuum Hypothesis tend to go insane, which reminded me of this classic textbook intro:

Since Nietzsche is Chapman’s favorite philosopher we discussed which would be the best book to include in this salon. Contenders:

We agreed Heidegger would probably be better if he had written in English, but perhaps even better if written in Latin, Old Norse, or Sanskrit. This led to a long tangent on the differences and merits of various human languages.

ob.LW

Good discussion of extremely ancient cities and Graeber’s last book:

Wokeism as a new religion and Boghossian’s Great Realignment theory:

The rabbit hole of the week was around the notion of Kyriarchy and its relation to Western Civ/Anglosphere/Game A as a self-terminating strategy…

Are chakras real? Evan had an interesting theory of chakras as kind of a communication mechanism between low-level subconscious processes and consciousness using the stable body map part of the mind.

Just to note an idea from Evan: start a trend on twitter where we list our top 10 Stoa sessions as a youtube playlist. TBD

2022.07.16 S02E32

to

Before @Evan_McMullen could join while on the road in PA, @dglickman and I were discussing the latest Making Sense podcast (general pessimism about the near future)

I mentioned Brett Hall (ToKCast) was coming out with a rebuttal from the general optimism perspective:

When Evan joined we quickly pivoted to Roko’s basilisk and I was surprised to learn that the LW elites still take it quite seriously despite the damage control discounting the matter when it was widely mocked by Rationalist detractors.

I noted that when I originally heard of the Basilisk, it immediately reminded me of the basic contours of the Judeo-Christian god (i.e. act as if you believe and obey me now to avoid infinite punishment later). We spent a bit of time trying to come up with reasons why the Christianity egregore might have been a net positive on civilization but eventually gave up.

Evan recommends an alternate history book:

We rounded out the discussion, tying it back to nihilism, talking about how EA seems designed to generate meaningfulness for secular rationalists (and how no one should be surprised that it turned into a bit of a cult), and whether suicide could be considered a perfected form of nihilism (fair arguments on both sides of that claim).

2022.07.23 S02E33

to

Thanks to @dglickman for joining the final session on the Nihilism section. We started by trying to answer a relatively easy metaphysical question: Why is there something rather than nothing? Though we disagreed on the response, we did agree to reject the either/or framing. For me, it is not an either/or question, but rather a both/and. We do indeed observe something in our timeline, but I suggest that our understanding of physics means there are alternate Everettian timelines with nothing: whether because of different physical laws or different initial conditions, some non-zero measure of alternate timelines have no matter, energy, space, or time, and are therefore indistinguishable from nothing.

From there, we transitioned to Occam’s razor as a method of model selection, and then did a deep dive into pop culture and TV tropes and modern myths (including Westworld, For All Mankind, Stranger Things, The Sopranos, Game of Thrones, Lord of the Rings, Hamilton, Star Wars, Star Trek, the MCU, and the Wizard of Oz).

Circling back to Nihilism we discussed whether we are personally in a good position now to talk someone out of it, having read Chapman’s extensive and reasonably complete rebuttals. Neither of us was especially confident, but perhaps it will take some practice. Daniel surprised me with a hot take: maybe meaning itself has become too reified. Maybe it is a lot more shallow than it seems, like beliefs and propositions. Maybe worrying about it is unnecessary? I have to admit I’ve been so steeped in the material it never occurred to me to consider this possibility.

Before we leave nihilism behind, I’m tempted to read this paper and its responses:

2022.07.30 S02E34

to

Thanks to @dglickman and Arizona for an engaging discussion as we pivoted from Nihilism to the Complete Stance. We started with a bit of a tangent, whether Chapman intentionally avoids whole categories of criticism by claiming not to be doing philosophy. This strategy has some famous antecedents, like the logical positivists discounting all of metaphysics as nonsense, or religious scholars claiming that logic has no bearing on their discourse. “You have no power here” :grinning:

Arizona decloaked when we started down the rabbit hole of the week: consciousness. I mentioned that I was disappointed (but not really surprised) that all the responses I saw to Nick Cammarata’s claim were in agreement with the sentiment that intelligence has nothing to do with consciousness:

In contrast, my current understanding is that consciousness is necessary to surpass a certain level of intelligence, perhaps mammalian level. I find friend-of-the-Stoa Frank Heile’s theory of consciousness quite compelling, which is itself predicated on the good regulator theorem from control theory and on attention schema theory:

Briefly, any control system that can control attention must have an internal model of attention, which is identical to consciousness. Simpler organisms like insects and worms are definitely aware of their environment, but they probably aren’t complex enough to have any sort of focus or attention, so there is no need to control attention. The ability to keep the sensory landscape relatively static and attend to different parts (like focusing on a sound or the periphery of the visual field) demonstrates control of attention. The corresponding attention model must necessarily contain the contents of attention, which is precisely what defines consciousness. We (everyone conscious) are a model of attention, in a model of the mind, in a model of self, in a model of the world, in the brain of an organism. So in that sense, it is true that the (conscious) self is an illusion because it is the map, not the territory.

Other related topics that came up:

2022.08.13 S02E35

We started with a discussion of some very exciting and related news: @Evan_McMullen and Matt Arnold announced they are organizing a gathering about metamodernity, sense-making, rationalism, metarationality, and thinking-about-thinking, for bloggers, their readers, and enthusiasts. Scheduled for mid- to late-2023, we have plenty of time to plan.

I found Jordan Hall’s response mildly amusing

Makes a bit more sense when TIL he had moved to Ecuador. We were wondering if he landed in a startup city like Prospera, which allowed me to mention my connection through being one of the main developers of the Ulex open-source legal system (law is code), which of course triggered Gödel’s Loophole:

Finally getting around to the main topic we agreed that the description of the completion textures did indeed make the complete stance sound very appealing. I can easily imagine aspiring to live in a mode of being characterized by those textures.

The six textures can be thought of as each leading to the next: wonder → curiosity → humor → play → enjoyment → creation. It is useful to understand this as a causal sequence. It is also useful to understand that it is not actually one.

Turns out we are all fans of a movie that captures the wonder of humanity and civilization:

In very tangentially related news:

2022.08.20 S02E36

to

Very enjoyable discussion with @dglickman that followed many tangents before getting to the complete stance, including the prospects of John Carmack’s new AGI company, the Vassarite and Zizian spin-offs of the BA rationalist community, and the recent controversy around Sam Harris.

I confessed I had some trouble understanding Chapman’s understanding of the concept of understanding. Chapman states:

An understanding is a way of being; nebulous but effective patterns of thinking, feeling, and interacting. An understanding is not a collection of statements that might be definitely true or false.

and in a footnote:

As a historical note, “understanding precedes representation” was a central slogan of Phil Agre’s and my work in artificial intelligence.

My own conception of understanding is that an agent can be said to understand something if it has access to an internal model sufficiently accurate to explain and/or predict the thing, which seems to require representation and contradicts what Chapman is claiming.

Daniel suggested looking at it from a developmental perspective: an infant human might gain some understanding of its world through interaction and experimentation before it has a cognitive model of the world; hence, understanding precedes representation.

Next I tried to imagine a physical model of “stance space” where an individual’s current stance moves between the complete stance and various confused stances. Chapman says that the confused stances are easy to attain but difficult to maintain while the complete stance is the opposite. Daniel helpfully suggested that the complete stance might be like a plateau with steep sides, difficult to climb to the summit, but easy to stay on the flat top once there. In this view the confused stances might be like low peaks or ridges, easy to climb to the top but also easy to fall off a side? Maybe? Needs more work.

We ended by discussing the possibility of a complete stance program prompted by my (lame) joke of joining a support group if you find yourself falling off the wagon into the confused stances again and again. We resolved to continue the discussion next time, but for next week we’ll work on noticing when we revert to confused stances and recall what strategies we took to return to a more complete stance.

2022.09.03 S02E37

all pages under:

Thanks to @dglickman for an enjoyable conversation about monism and dualism. We ended up spending most of our time discussing ontology: do boundaries exist only in the mind? What is a mind anyway? In what sense do electrons exist? What are objects really?

Turns out we’re both partial to the (David) Deutschian view that something exists if it is an essential component of the best explanation of the subject at hand. Daniel attributed this view to Carnap, while Deutsch attributes it to Popper. I wondered if Popper knew Carnap, and Daniel pointed out they were both part of the Vienna Circle. (I misheard this as “inner circle,” and coincidentally the Vienna Circle did indeed have an inner circle, which included Carnap but not Popper.)

Definitely worth reading Nerst’s piece referenced by Chapman:

I took exception to Chapman’s definition of object:

So, intuitively, an object is a bunch of bits that are connected together, and aren’t connected to other things. The boundary of the object is where the connections stop.

… and pitched Daniel on my definition of object (part of my minimal ontology):

An object is a set of related properties.

I claim that my definition is more general, including not only physical objects but mathematical and programmatic objects. Daniel was not convinced though he said it “wasn’t terrible”. :laughing:
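As a minimal sketch of what that definition might look like operationally (the specific properties and the `has` helper are mine, purely illustrative), an object can be modeled as nothing more than a set of (property, value) pairs, so physical, mathematical, and programmatic objects all fit the same mold:

```python
# "An object is a set of related properties": model each object as a
# frozenset of (property, value) pairs. The property names and values
# here are illustrative, not any official ontology.
electron = frozenset({("charge", -1), ("spin", 0.5), ("mass_MeV", 0.511)})
triangle = frozenset({("sides", 3), ("angle_sum_deg", 180)})

def has(obj: frozenset, prop: str) -> bool:
    """An object 'has' a property if some pair in the set names it."""
    return any(name == prop for name, _ in obj)

print(has(electron, "charge"), has(triangle, "charge"))
```

Nothing in the representation cares whether the properties are physical measurements or mathematical facts, which is the sense in which the definition is more general.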

I mentioned that my copy of Active Inference arrived.

I’d like to add it to our reading lists along with Cantwell-Smith:


Daniel in his new location

2022.09.10 S02E38

to

We were delighted to welcome back @Valeria after a lengthy hiatus. :slight_smile: After chatting a bit about Vervaeke’s appearance on the Lex Fridman podcast we discussed Chapman’s strange obsession with aardvarks and tarantulas.

I confessed that when reading the case for selflessness I heard it in the voice of Sam Harris, since he brings it up so often on his podcast. I think I even mentioned that I started tuning him out when he does that, and yet not one hour later I was listening again to Sam going on at length to explain his position in his latest episode.

To be fair, this time he clarified that there are many definitions of self that he does concede exist. It is just the one we associate with the subject of experience that he thinks is an illusion. That refinement may somewhat deflect my argument that the existence of other minds implies the existence of a self, if the self is just one mind’s model of itself.

Valeria mentioned that watching a lot of Chinese dramas gives her the sense that that culture has quite a different sense of self than Western cultures, much more collective. For example, the notion of collective guilt is taken for granted, and it makes sense for one individual to take the punishment for the group even if they were not directly involved in any transgression. That notion seems not only counter-intuitive to my sensibilities but borderline appalling. As an aside, Valeria mentioned that a common trope in Chinese dramas was for one person to give another person their eyes, which led to an interesting tangent on tropes and superpowers.

Even though the page “A billion tiny spooks” is marked as a stub, Daniel and I found plenty of material to disagree with.

One example, Chapman’s characterization of cognitivism:

To acknowledge and include cognitivism—the doctrine that people have beliefs, desires, and intentions (not merely dispositions and behaviors).

In my view, beliefs and desires are models of dispositions and behaviors. It isn’t really a choice between the two. I have to admit I was very influenced by Minsky. I think I read Society of Mind when it was new in the 80s.

This led to an interesting tangent on IFS

Valeria finds it quite useful, but agrees that the parts are created (not discovered) by the practice. I think she agreed with Daniel that it can be risky to fixate parts. I drew a comparison to the strange new subculture on Tiktok around DID

To bring things back to the beginning of the meeting, I mentioned that I found Vervaeke’s description of wisdom very reminiscent of Chapman as a kind of meta-rationality or a way of applying the right kind of rationality to a particular situation. The video link above is queued to the beginning of that section.

2022.09.17 S02E39

and all sub-pages…

Though neither Daniel nor I have direct experience with the Mission confused stance, we did observe it among EA adherents. This was confirmed by @Sahil when he joined a bit later. The EA group houses seem to have a bit of a monastic vibe. I was a bit skeptical when Daniel suggested actual Buddhist monastics might be on a Mission, but now I can confirm, e.g. from the Willow Monastic Academy:

Our environmental and social ecosystems are in peril. Between a worldwide pandemic, an increasing crisis of loneliness and mental health, compounding climate catastrophes, and a media landscape that prioritizes capturing our attention over providing truth, it has become increasingly clear that our current way of doing things is unsustainable. How do we emancipate ourselves from these harmful systems and become the kinds of leaders the world desperately needs?

We talked about BS jobs and I worried half-jokingly that I might be in one now.

Theoretically the market should eliminate them; it doesn’t make sense to pay an employee more than they contribute in value to the organization. But experience shows that real companies are not ruthlessly efficient: they sometimes earn enough, and accumulate sufficient bureaucracy, to have negative-value employees on staff. It just isn’t worth their time to track them down and replace them, especially when it is so difficult to measure how much they are actually contributing.

We have questions…

I once acted as a business consultant to Fifi, who had decided that her mission in life was to create the world’s first mobile beauty spa.

Who is Fifi?!

We have a new top contender for our next selection after running out of meaningness.com pages:

In other news, apparently Wolf is not a fan of AI…

We all heard Sahil promise to join us at the Fluidity Forum next year, right? :slight_smile:

2022.10.01 S02E40

to

@dglickman and I began with a discussion of the latest map of the liminal web space generated by the Fluidity Forum poll. Very nice to see The Stoa taking a prominent position in the middle of the Great Tertiary Layer. I forget how we got on the topic of coffee, but we talked about how coffee houses were once the hotbed of philosophy in the 18th century and I mentioned that Peter Limberg had similar aspirations for The Stoa, that one day it would be a real cafe hosting regular philosophy meetups. Maybe some day :crossed_fingers:

Daniel asked if I was familiar with Decoding The Gurus, and as it so happens I had just listened to my first episode, number 55 with everyone’s favorite sense-makers in Game B.

I don’t recall hearing that level of savage criticism since Brent Cooper’s famous rant

Looking back on our own views on Game B, we’re both quite a bit more cynical now. Too much talk, not enough action. It seemed like everyone really wanted to like the Initiation to Game B video, and publicly praised it while suppressing any thoughts of cringe. Perhaps a case of preference falsification?

To be fair, Schmachtenberger did launch the Consilience Project, and The Future Thinkers did launch their Smart Village, so I’m willing to suspend judgment for a while.

We went on a fairly long tangent about fame, wondering why anyone would desire it. With the atomization of media and culture in the last few decades, it seems like fewer people might be world-famous, or even nationally famous.

Finally we talked about sacredness, and the notion that “everyone worships something”. I offered my view that what we hold sacred is the belief that is closest to our core. Beliefs held far away are held lightly and easily given up, while ones close to our imagined centers are more highly valued. The ones that we give up last are sacred. For me, it is logic that I hold sacred, as everything else I value depends on it. Daniel is going to give it some thought for the next meeting.

2022.10.08 S02E41

to

@Evan_McMullen was joyfully welcomed back after a bit of a hiatus, and briefed Daniel and me on his recent projects including planning for the Fluidity Forum (most likely happening around this time next year) and his new series on the Stoa:

I noted a synchronicity with eigenrobot’s latest revelation on the reality of egregores which I posted to the IDM channel for the explicit triple

Territory; Mapper; Map.

Chapman’s mapping between generations and modes resonated with me; I feel at home in the subcultural mode (RPG/fandom nerd in the 80s, goth/industrial cyberpunk in the 90s). It also tracked with Evan’s and Daniel’s respective self-id with transitional cohorts, Xillennials and whatever they call the one between Millennials and Zoomers (Milloomers? :joy:), on the cusps between the subcultural, atomized, and fluid modes. Discussing how the analogy breaks down led to an interesting tangent on leftist critiques of metamodernism and Game~B, culminating in an epic shitshow of a “debate” on the Stoa between Brent Cooper and Jordan Hall about Brent’s rant:

TIL Evan was on the pro MTG tournament circuit around the same time as Zvi

I was confused about Chapman’s characterization of vampires as symbolizing incoherence, whereas I thought of them more as symbolizing parasitism, if anything. Evan helpfully explained, and managed to tie it into his Stoa series thesis of replicating memetic parasites employing Girardian scapegoating as a crypsis strategy. Evan presented the outlines of his theory that trauma support groups tend to cause more trauma with their mantra of “hurt people hurt people”.

We ended with a wide-ranging discussion of global civilizational trends and possible futures invoking Jared Diamond’s theory:

2022.10.15 S02E42

We welcomed back @red_leaf after a lengthy hiatus, and used the opportunity to do a bit of a retrospective on meaningness.com overall. We generally agreed with the project: Chapman is right to reject both eternalism and nihilism, and there is a better 3rd way. Same for monism and dualism. Our criticisms were relatively minor, mostly along the lines of the models being too simplistic, and @dglickman suggested Chapman himself would probably agree. Particularly in applying Kegan stages to societal modes, it is likely more accurate to describe a society at any particular time by a histogram of its population across the different stages, and further, to describe each individual in the population as a histogram of stages as well.

mentioned, possibly recommended…

We pivoted from the main material to discuss Evan’s ongoing Stoa series…

Though we were all mostly on board with the main thesis (intersubjective parasites, aka egregores, as a significant factor in societal dysfunction), we have many remaining questions about the agency of these theoretical entities. I proposed that the closest analog in biology might be something like a species rather than an organism. If so, it is difficult to imagine how they could have agency in the same sense as organisms with preferences and beliefs. Hopefully, we will get more answers and clarifications in the 3rd session.

Returning to the material, we discussed invented traditions and timeworn futures and had some fun trying to think of Christmas carols that dated back further than the mid-20th century.

2022.10.22 S02E43

@dglickman and I kicked things off with a discussion of how the more recent decades (the 00s and 10s) are not yet as distinct as earlier ones like the 60s, 70s, and 80s. We expect they will acquire their own distinct character in time, since each saw the mass adoption of a significant technological wave (the internet and the smartphone, respectively). I took the opportunity to regale Daniel with old-timey stories about the times before the personal computer.

It is difficult to compare the revolutions of the early 21st century (mostly in technology) with the revolutions of the early 20th (in physics and math). Did someone who lived from, say, 1880 to 1950 see more and larger changes in their lifetime than someone who lived from, say, 1950 to 2020? I tried to imagine what it would be like to live through fundamental changes in physics in the near future. What could possibly change our commonly understood ontologies? Maybe if scientists were able to prove that DMT elves were real? :laughing:

We discussed Hanson’s Great Filter and the likelihood that it could happen in our personal lifetimes with respect to nuclear war or unfriendly AI. Daniel noted that nuclear war was unlikely to wipe out our species. Granted, but it could keep setting us back so we never get off the planet.

I saw a relevant tweet today:

The rapid advances in AI are now part of the zeitgeist; it seems most people are taking note. I have a personal example: my friend Agah was planning to publish a book based on his Neohuman podcast but gave up on the project because the automatic transcription software made too many errors. That was way back in July. Now, three months later, I was able to transcribe the episodes easily and accurately with OpenAI Whisper.

Daniel raised the possibility that this latest wave of advances, propelled by deep learning, LLMs, and transformers, might run out of steam before we achieve human-level AI. What are we still missing? According to David Deutsch (and by extension, Brett Hall), the missing ingredient is creativity: AIs only do what we tell them. Like all software, they just follow instructions mindlessly.

I’m not convinced; I don’t think creativity is that mysterious. In the Popperian model, all knowledge is generated by a process of conjecture and refutation. This maps directly onto the evolutionary process of variation and selection. The creative part of the process (conjecture) is just variation happening in the mind, almost entirely at the subconscious level (I suggest). In this view creativity is the same process as knowledge production, just at earlier cycles and lower levels, and mostly subconscious.
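The conjecture-refutation/variation-selection mapping can be sketched as a toy program (my own illustration, not anything from Popper or Deutsch; all names here are made up for the sketch): conjecture is blind variation, refutation is selection against error.

```python
import random

def conjecture(theory, scale=1.0):
    """Variation: a blind, undirected modification of the current theory."""
    return [x + random.gauss(0, scale) for x in theory]

def refute(theory, observations):
    """Selection: error of the theory against observations; lower survives."""
    return sum((t - o) ** 2 for t, o in zip(theory, observations))

def popperian_search(observations, steps=500, seed=0):
    """Knowledge production as iterated conjecture and refutation."""
    random.seed(seed)
    theory = [0.0] * len(observations)
    for _ in range(steps):
        candidate = conjecture(theory)
        # A conjecture survives only if it withstands refutation
        # better than the incumbent theory.
        if refute(candidate, observations) < refute(theory, observations):
            theory = candidate
    return theory
```

Nothing in the loop “knows” the answer in advance; the variation step is blind, yet the surviving theory improves. That is the sense in which creativity needn’t be mysterious.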

2022.10.29 S02E44

@dglickman joined me for an abbreviated session while he was in transit on the train; sadly the connection was quite choppy, but it was still remarkable that he was able to join at all. We discussed Chapman’s model of the two counter-cultures, the hippies and the moral majority. While Daniel has encountered only members of the former, I’ve had the dubious pleasure of living for a couple of years embedded in the religious right when I lived in Tulsa, OK next to ORU in the late 70s. Strange how that era is already largely forgotten.

We discussed Ken Kesey and his Merry Pranksters, and The Electric Kool-Aid Acid Test, and decided that peak hippy likely happened in 1968, between the Summer of Love (67) and Woodstock (69).

TIL Ken Kesey was the author of One Flew Over The Cuckoo’s Nest

2022.11.12 S02E45

@dglickman joined me again after a break last week. I started off with a confession that I was a bit surprised (though on reflection, it seems obvious) that the “traditional” family with parents and children living together has only been around since the 1800s. Daniel pointed out it is difficult to think of any modern institutions that have been around for much longer. I suggested maybe going to church on Sundays, and while true, it is not at all the same experience as in pre-modern times. We talked about how modern Christianity, especially in liberal Protestant denominations like the one I was raised in, is barely Christian at all.

On the topic of both counter-cultures being related in their rejection of rationality, Daniel observed that they only rejected it in certain circumscribed areas while remaining mostly rational. I attempted an analogy with genetics: we share on average 50% of our genes with siblings while sharing something like 99% with chimps. Those facts can be reconciled by realizing that the 50% we share with siblings is counted after ignoring all the genes we share with every other human.
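A toy simulation of how those two numbers coexist (the site counts are illustrative round numbers, not real genomic figures): almost all sites are fixed across the population, and the “50% with siblings” only counts the variable ones.

```python
import random

def sibling_sharing(total_sites=10_000, variable_sites=100, seed=42):
    """Compare overall vs. variable-site-only sharing between two siblings."""
    random.seed(seed)
    # Sites fixed in the whole population: identical in everyone by definition.
    fixed = total_sites - variable_sites
    # At each variable site, each sibling independently inherits one of two
    # parental alleles, so the siblings match about half the time.
    matches = sum(random.randint(0, 1) == random.randint(0, 1)
                  for _ in range(variable_sites))
    overall = (fixed + matches) / total_sites
    among_variable = matches / variable_sites
    return overall, among_variable
```

With these toy numbers, overall sharing comes out near 99% while sharing restricted to the variable sites hovers around 50% — the same reconciliation as in the chimp comparison.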

Now that Evan’s Trolls series is over, we can ask the question: do the intersubjective parasites have agency? I believe Evan’s answer was that it is useful to assign them agency so as to better deal with them, even if they don’t have real agency.

We went down a bit of a rabbit hole discussing the nature of distributed intelligences, from ant colonies to modern corporations. I don’t think corporations have agency like humans but it is certainly useful to talk about them as if they have beliefs and goals.

Somehow a tangent on the latest FTX scandal led to Daniel recommending Alex Kaschuta’s podcast.