I just started reading Marvin Minsky's "Society of Mind". It's a classic in the field of AI, from what I've read about it. I'll try to post notes, and potentially commentary, on this page.
One thing I like so far is that it does not shy away from the philosophical or the profound, and sometimes mentions the Big Questions -- although it also makes it clear that answering those is out of scope for the work, and may even be impossible.
The agents of the mind
Minsky characterizes the mind as essentially a distributed system made of simple parts that run simple programs -- agents. Agents can be subordinate to others, which yields a tree (he implies there's some further interaction that could turn this into a graph, but doesn't go into much detail for now). I liked how he pointed out that every agent, and in particular the base ones, should be simple; if one ends up with an agent that seems "smart", one has only succeeded in hiding consciousness/intelligence in yet another black box.
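To make the idea concrete for myself, here's a toy sketch of such an agent tree (my own illustration, not anything from the book -- the names and structure are invented): each agent runs a trivial program of its own and may delegate to subordinates, and no single agent is "smart".

```python
class Agent:
    """A deliberately dumb agent: one tiny program plus subordinates."""

    def __init__(self, name, action=None, subordinates=None):
        self.name = name
        self.action = action              # the agent's own simple "program"
        self.subordinates = subordinates or []

    def run(self):
        # Do this agent's own tiny bit of work, then activate subordinates.
        results = []
        if self.action:
            results.append(self.action())
        for sub in self.subordinates:
            results.extend(sub.run())
        return results

# A "builder" made of dumber parts; the intelligence, such as it is,
# lives in the arrangement, not in any single box.
move = Agent("move", action=lambda: "moved arm")
grasp = Agent("grasp", action=lambda: "grasped block")
builder = Agent("builder", subordinates=[move, grasp])

print(builder.run())  # ['moved arm', 'grasped block']
```

The point the toy model captures is exactly the one above: if any single `Agent` here needed a clever `action`, the model would just be hiding the intelligence in that box.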
Easy things are hard
This part is classic; I've gotten the same idea from elsewhere (perhaps Wikipedia, perhaps some talk), so it's become sort of canon (or a cliché?). The core idea is that one tends to underestimate the complexity of things that are relatively intuitive, like movement, and overestimate others, like analytical thinking. Many intuitive behaviours are actually the crystallized forms of very complex behaviours that one acquired (in many cases painfully) during childhood. This also goes for basic reasoning that one now thinks of as "common sense".
Conflict and bureaucracy
He points out further that the agent tree is shaped in many ways like a human bureaucracy -- say, a company. The implied hierarchy of agents plays a part in resolving conflict. He also makes a point of noting that the reader is probably part of such a bureaucracy, and is thus another sort of agent:
"Which sorts of thoughts concern you the most - the orders you are made to take or those you're being forced to give?"
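The conflict-resolution role of the hierarchy could be sketched like this (my reading, not the book's formalism -- the priority scheme is invented): when two subordinates both bid for control, their common superior arbitrates, like a manager settling a dispute.

```python
def arbitrate(superior_priorities, requests):
    """Pick the action proposed by the subordinate the superior ranks highest.

    superior_priorities: agent names in preference order (index 0 wins).
    requests: {agent_name: proposed_action} from conflicting subordinates.
    """
    winner = min(requests, key=superior_priorities.index)
    return requests[winner]

# "eat" and "sleep" both bid for the body; the superior currently ranks
# "eat" first, so that bid wins.
priorities = ["eat", "sleep"]
requests = {"sleep": "lie down", "eat": "pick up food"}
print(arbitrate(priorities, requests))  # pick up food
```

Nothing here is intelligent on its own -- the superior just applies a fixed ranking -- which is in keeping with the "no smart agents" rule above.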
Memory is the basis for cooperation between agents; when agents "on the same level" collaborate on something (which requires a sort of graph, as mentioned above), memory comes into play. It basically seems to store parameters for the agents that have to run.
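One way to picture that (again my own illustration; the book doesn't give a mechanism, and the agent names and memory keys are invented): two same-level agents coordinate not by talking to each other, but by reading their parameters out of a shared memory.

```python
# Shared memory acts as the coordination channel: it holds the
# parameters each agent needs for the current task.
memory = {"target": "red block", "location": (3, 4)}

def find_agent(mem):
    # Reads its parameters from shared memory rather than from a peer.
    return f"looking for {mem['target']} at {mem['location']}"

def grasp_agent(mem):
    # Cooperates with find_agent only via the shared 'target' entry.
    return f"grasping {mem['target']}"

print(find_agent(memory))   # looking for red block at (3, 4)
print(grasp_agent(memory))  # grasping red block
```

The agents never reference each other directly; changing `memory["target"]` redirects both at once, which is roughly the "parameters for the agents that have to run" reading above.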