3. Utopia

I’ll call my project a utopia, then, even if the term has historical baggage; or rather, precisely because some of that baggage makes sense to me to adopt.

At the risk of alienating some of my better-educated readers (just groan and move on; it won’t last long), I should clarify that utopia means not-a-place, not a-beautiful-place; the latter would be eutopia, another word that Thomas More used. The two fused at some point, and utopia ended up with the meaning of eutopia, in particular as opposed to dystopia. I think this ambivalence may be a good thing because, as I said earlier, I’m unsure whether what I’m doing is the right thing, or whether it will work out at all. Perhaps someday someone better than me will actually produce a eutopia out of their utopia; until then, I see a utopia as a plan and an intention, an unstable arrangement that must eventually resolve into eutopia, or dystopia, or just nothingness.

When we think of the world we do it through models, even when we’re not aware of it; the world in our minds is not the real world, but it resembles the real world in enough ways to be useful (evolutionarily, socially).

What I’m interested in is not only the end state, but also a way of potentially getting there. For this purpose, I plan to build an explicit model of the world, of the utopia, or of both at once; this is, I know, a crazy undertaking, but by my own definitions I will succeed even in failing.

Thus I will do the following: start an open source project on GitHub, choose a language (possibly Python), and get on it: design an architecture to model my project. If you don’t know what I’m talking about specifically, don’t worry about it: it means I intend to write part of this model as code, and part as human-readable text; hopefully over time the two parts will converge. Allow me to explain further:
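To make that concrete, a repository for this could start out with a layout along these lines; every name below is a guess on my part, not a committed design:

    utopia/
      README.md        # the human-readable half of the model
      docs/            # longer prose: world rules, design notes
      utopia/          # the Python package, the code half
        world.py       # the grid and its stepping logic
        objects.py     # passive objects and their properties
        agents.py      # objects that act of their own volition
        models/        # neural network components
      tests/

The two halves (docs/ and the package) are the "text" and "code" parts I mentioned; the hope is that they describe the same thing more and more closely over time.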

I will model objects in a world, a very limited world to begin with; think of a grid. Objects will have properties and may react to the environment; some of the objects will also be agents, in the sense that they will act of their own volition (for some definition of volition). I expect each object that acts or reacts to the environment to look like this (there’s a code sketch after the list):

  • Python code that implements actions. This is, basically, hardcoded behaviour that humans actually sit down and write for the object in question.
  • Neural networks that implement object-specific actions, or aid in implementing actions: pattern recognition with a convnet, action suggestion with a deep Q-network trained via reinforcement learning, natural language generation with transformers, and so on.
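As a minimal sketch of that two-part structure, with all names and details being assumptions rather than settled design, an object might pair a hardcoded method with an optional learned policy:

    from dataclasses import dataclass, field

    @dataclass
    class WorldObject:
        """Anything placed on the grid: a position plus arbitrary properties."""
        x: int
        y: int
        properties: dict = field(default_factory=dict)

        def react(self, world):
            """Hardcoded, human-written response to the environment."""
            pass

    class Agent(WorldObject):
        """An object that also acts of its own volition."""

        def __init__(self, x, y, policy=None):
            super().__init__(x, y)
            self.policy = policy  # e.g. a trained Q-network wrapped as a callable

        def act(self, observation):
            if self.policy is not None:
                # learned component: the network suggests an action
                return self.policy(observation)
            # hardcoded fallback behaviour
            return "wait"

The point of the split is that the hardcoded part stays legible to humans while the learned part stays swappable.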

We’ll also need a protocol for objects to interact. Nothing necessarily fancy; the core object-oriented concepts built into Python may suffice for a while. If we ever get to modelling humans (to any extent, however risible), I could see them using natural language to communicate directly; they would then be something like chatbots that also have some awareness of an environment. It should be fun.
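Purely as an illustration (every name here is hypothetical, and this reuses nothing from the sketch above), such a protocol could start as ordinary method dispatch:

    class Message:
        """A minimal envelope for object-to-object interaction."""
        def __init__(self, sender, verb, payload=None):
            self.sender = sender
            self.verb = verb          # e.g. "push", "say", "give"
            self.payload = payload

    class Thing:
        def receive(self, message):
            # plain OO dispatch: look up a handler named after the verb
            handler = getattr(self, f"on_{message.verb}", None)
            if handler is None:
                return None  # ignore verbs this object doesn't understand
            return handler(message)

    class Door(Thing):
        def __init__(self):
            self.open = False

        def on_push(self, message):
            self.open = True
            return "the door swings open"

    # e.g. Door().receive(Message(sender=None, verb="push"))

Objects that don’t understand a verb simply ignore it, which seems like the right default for a world where very different things coexist.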

Eventually I’d like to apply ML to the hardcoded facet of agents too; think genetic algorithms that introduce code mutations and evaluate the fitness of the resulting agents.
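Hand-waving heavily (nothing here is implemented, and this toy mutates numeric parameters rather than code itself, which is the much easier cousin of what I just described), the evolutionary loop might look like:

    import random

    def mutate(genome, rate=0.1):
        """Perturb an agent's parameters; a stand-in for true code mutation."""
        return [g + random.gauss(0, 1) if random.random() < rate else g
                for g in genome]

    def evolve(population, fitness, generations=100):
        """Keep the fitter half, refill with mutated copies of the survivors."""
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: len(population) // 2]
            population = survivors + [mutate(g) for g in survivors]
        return population

Here population would be a list of parameter lists and fitness any function scoring an agent; mutating actual Python code would need something far more involved, like AST transformations.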

This is a naive simulation; it may, at best, top out at the stage of making a reasonably fun game (for nerds like me). Simulating intelligence has been tried many times before, starting in the 60s (and way before, if you count non-implemented projects); it has always failed, of course. I could see it being somewhat more successful now, by virtue of our access to more advanced deep learning architectures. But for all I know, this too has already been tried thousands of times.

So why do it?

Well, I think it’s a useful exercise, even if it won’t work for any useful definition of “work”. We can learn something from it. And we can use it as a model to learn to think about the world in new ways.

I plan to call the codebase, this naive simulation framework and implementation, utopia.