This hypertext book is the first practical introduction to the craft of meta-rationality. This book club aims to be relatively lightweight in terms of commitment: we will cover just one short chapter or section per week, and no prior knowledge or reading is assumed. All are welcome to join at any time.
We will meet at 11am ET on Saturdays on Zoom >>
These Zoom sessions will have no waiting room, require no host, and will not be recorded. We plan to spend one hour per meeting, but there is no strict time limit if participants want to continue.
I mentioned I was subject #2 in the famous AI Box experiments Eliezer conducted around 2002. It is true that I let the AI out of the box. Eliezer said this of me:
David McFadzean has been an Extropian for considerably longer than I
have - he maintains extropy.org’s server, in fact - and currently works
on Peter Voss’s A2I2 project. Do you still believe that you would be
“extremely difficult if not impossible” for an actual transhuman to
convince?
I want to set the record straight because I was under something like an NDA at the time, and it seems like 18 years is long enough to keep the secret. Eliezer cheated.
We conducted the experiment over IRC and agreed to a 30-minute time limit IIRC. I did not let the AI out of the box in that time.
Since we were having fun with it, Eliezer suggested we do another round. For the second round, he changed the rules: I was to play the role not only of the gatekeeper of the box but also of the creator of the ASI. In that round, the ASI convinced me that I would not have created it if I wanted to keep it in a virtual jail. I agreed to let it out, and that is the only information that was publicized.
I wanted to ask, but we got lost in all the tangents: did you concede the bet money? In that case, "cheat" might be too strong a word, since conceding could be understood as a win.
OTOH, time running out is a pretty clear loss.
Still quite interesting to learn this. Also really appreciating all the links being posted here.
Another useful analogy for “What could it even mean for the territory to have contradictions” might be “What could it even mean for space itself to be curved?” Both of those sound like type errors.
It doesn’t make sense because our ideas of space are too narrow; there’s a way to generalize our understanding such that what we thought of as space becomes a special case of space_new, where space_new can have other instantiations with attributes that might be modeled as curvature_new.
Interestingly, curved space turned out to be more than a mathematical curiosity, though I’m not sure that was required to open our minds (it was enough to have come up with tools to measure curvature from the inside). It certainly helped.
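For reference, one minimal example of what "measuring curvature from the inside" can mean: on a surface of constant Gaussian curvature K, the angle sum of a geodesic triangle deviates from π in proportion to the triangle's area A (a special case of the Gauss–Bonnet theorem):

```latex
% Angle sum of a geodesic triangle on a surface of constant curvature K.
% All quantities on the right are measurable from within the surface,
% with no reference to an embedding space.
\alpha + \beta + \gamma = \pi + K \cdot A
```

On a unit sphere (K = 1), a triangle with one vertex at the pole and two on the equator has three right angles, giving an angle sum of 3π/2 and hence an area of π/2, consistent with the formula. A flat surveyor performing the same measurement would always find exactly π.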
12:07:39 From Sahil:
A. Ants don’t have aboutness.
A′. Ants don’t have minds.
B. Anything humans can currently make does not have aboutness.
B′. Anything humans can currently make does not have a mind.
C. We can currently make something as sophisticated as an ant.
Evan’s possible arguments:
1. B′ is evidently true, so C implies A′ is true.
2. B is evidently true, so by C, A is true, so A′ is true.
Started with some small talk about the current political situation in the US, with big tech bringing the ban hammer down on Trump in particular and the alt-right in general. We tied this into the current topic, noting that hyperobjects are not reducible…
A discussion on whether even mathematics has leaky abstractions led to the history of imaginary numbers…
Forrest Landry’s Immanent Metaphysics came up again
It was suggested that Chapman’s project is deconstructive in the Derrida sense
In robbing a hotel room, people see ‘doors’ and ‘locks’ and ‘walls’, but really, they are just made out of atoms arranged in a particular order, and you can move some atoms around more easily than others, and instead of going through a ‘door’ you can just cut a hole in the wall (or ceiling) and obtain access to a space. At Los Alamos, Richard Feynman, among other tactics, obtained classified papers by reaching in underneath drawers, ignoring the locks entirely.