Politics (or: “Fun at parties”)

Politics is a distributed system — one of the many that humanity as a whole is running. It’s a set of algorithms (interfaces and their implementations) that influence the present and future state of mankind; it’s also all individuals as a computing mass communicating within the protocol that they choose or have to bear (tyranny, capitalism, socialism, communism) to the highest level of abstraction possible, that which concerns itself with the ways in which we want to live, or be, together. Consider that this system runs within the constraints imposed by our limitations; the fact that we are humans with inefficiencies, both in our characters and in our bodies; and that we communicate mostly with pained ambiguity and at a low information transmission rate.

Now picture a world where all our characteristics can be tinkered with and improved on; and where the society we work in, the distributed system manifested in our interactions, can move at the speed of silicon and Von Neumann architectures — or better.

That which in our present world takes 50 years — the political debates that societies go back and forth on for decades — would, in this world of silicon, take only seconds. The abolition of all forms of sexism and segregation might take the blink of an eye, if still pending; perfect equality (that which maximizes the total amount of freedom) two seconds, with the fairest possible distribution of the sum of the world’s wealth thrown into the mix.

What could come after this? So freed from its limitations (its shackles, if you like a metaphor), how would humanity advance itself further? Which measures would it take next, and in which ways would it choose to alter itself to keep improving? It would take only a few more such iterations for that world and its individuals to become pretty much unintelligible to us, present day society, so at this point the interest usually weakens. But what we’re all doing here, now, is writing the programs and maintaining the systems that may still run in that hypothetical future, in some shape or form — heavily refactored, and unshackled.

Anyway, this is what I say nowadays when people bring up how they’re tired of politics at parties.

How to beat procrastination

How to beat procrastination: first, you have to decide to beat it. To do this, you’d better research known methods first — then you’ll be able to beat it more effectively, and you’ll do what you have to do (the most important thing almost on your mind, the task not quite at hand) next.

Which you’ll do right after you finish your research. It turns out there are several methods to beat procrastination, including but not limited to:

  • Organizational methods, like Getting Things Done. This one is promising, and that’s such a good title — that’s what you want, you want to do things. Like the things you need to do. Which you’ll do after digging a bit deeper.

You might as well download the ebook now, as it’s better to go straight to the source of things when researching. The Kindle version on Amazon seems a bit expensive — perhaps pirate it first then, try before you buy as they say, and libgen.io is so convenient — it’ll take a few minutes to get the right version that works on your Kindle, but then it’ll be smooth sailing, or perhaps you’ll have to fire up Calibre to convert an .epub to a .mobi (why did Amazon not include .epub support?), but hold on, where is your Kindle again?

2019-02-03 (or: “True Colours”)

This past one was a pretty draining week, but a good one. We had visitors over, and I took a week off from my day job to attend a full-week workshop. The workshop was great, and the visitors were too — no complaints really; as I said, it was a good week, but both L. and I ended up pretty tired from all the obligations, the (at times) stress and the socializing. We’re usually like this regardless of how much we like the company or the activities, so I’m glad we had the second half of this Sunday free.

I thought about an idea for a short story having to do with machine learning: in the future (10 years from now? 20? Does it matter?) AI models run the world (for governments and/or companies). Most of what humans do is data gathering for improving the models: labelling data for the computers, which “want” to improve the quality of their predictions and decisions; essentially filling in empty cells in the datasets that computers run learning algorithms on. Like, for example, figuring out the true colour of a certain fruit in sub-Saharan Africa (the example comes from an introductory ML video I was just watching, where they mention a toy model that classifies fruit by colour). I could probably call the story “True Colours” if I were to actually write it. It’s also a Cyndi Lauper song, which I guess is fine.

Best URL of 2014

That’d be this 2014 post on the topology of manifolds and how they relate to neural networks. The visualizations are great, and it basically blew my mind. I didn’t know of the manifold hypothesis until now.

The manifold hypothesis is that natural data forms lower-dimensional manifolds in its embedding space. There are both theoretical and experimental reasons to believe this to be true. If you believe this, then the task of a classification algorithm is fundamentally to separate a bunch of tangled manifolds.

I don’t understand the whole post, or the whole argument, yet (not to a great level of detail anyway), but: if you want to build a neural network that distinguishes cat and dog pictures, the worst case would seem to require a huge network, with a number of nodes/layers that grows with the size of the image — far more than the handful that seems to work reasonably well in practice. So the number of dimensions over which the “images” are potentially spread is huge, but it seems that in the real world the dog and cat images lie on a “shape” that allows for relatively easy disentanglement, and these shapes can probably be realized in much lower dimensions. This could explain the observed predictive power of relatively small neural networks.
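To make the disentanglement idea concrete for myself (a toy of my own making, not taken from the post), here are two classes tangled in the plane — an inner disc and a surrounding ring — that no straight line can separate, but that one extra dimension (the kind of feature a hidden layer could learn) untangles completely:

```python
import math
import random

random.seed(0)

# Two classes tangled in 2-D: an inner disc and an outer ring around it.
# No straight line in the plane separates them.
inner = [(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)) for _ in range(50)]
outer = []
while len(outer) < 50:
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    if 1.5 <= math.hypot(x, y) <= 2.0:
        outer.append((x, y))

def lift(p):
    """Add a third dimension: the squared radius."""
    x, y = p
    return (x, y, x * x + y * y)

# In 3-D the classes become linearly separable: a flat plane (a simple
# threshold on the squared radius) now splits them perfectly.
threshold = 1.0  # any value between 0.5 and 2.25 works here
assert all(lift(p)[2] < threshold for p in inner)
assert all(lift(p)[2] > threshold for p in outer)
```

The lift function here is hand-picked, of course; the post’s point (as I understand it) is that a small network can learn a rearrangement like this on its own.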

P.S.: almost unrelated, but the author’s pic is pretty awesome. I mean it.

Pearson

I didn’t know about the Pearson correlation coefficient until today. It seems like such a useful thing. From Wikipedia:

  • It is a measure of the linear correlation between two variables X and Y.
  • It has a value between +1 and −1, where +1 is total positive linear correlation, 0 is no linear correlation, and −1 is total negative linear correlation.

So, if you have points in the plane, it can tell you how well they match y = x (which yields 1) or y = -x (which yields -1), or neither. In ML, it can suggest that two features (variables/”input columns”) are not independent, and whether a feature is likely to add information to a model (which is the case when Pearson(feature, target) is close to either -1 or 1).
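Out of curiosity, here it is from scratch in plain Python — a toy of mine, not how you’d do it in practice (scipy.stats.pearsonr or numpy.corrcoef do this for you):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance (numerator) over the product of standard deviations (denominator);
    # the 1/n factors cancel out, so they're omitted.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [1, 2, 3, 4, 5]
print(pearson(xs, xs))                 # 1.0: points on y = x
print(pearson(xs, [-x for x in xs]))   # -1.0: points on y = -x
```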

I’m glad I know about it now.

Friction

I keep telling myself: if only there was zero friction in the act of writing — if thinking and writing were somehow almost the same, say — I’d write more frequently, and people in general would as well. I’d like to have better input methods than the ones we have available, in particular when I’m on my phone. I find writing on the phone awkward and at times frustrating to the point where I want to give up. Perhaps this is because I’m not young enough to have grown up with touch interfaces around me, or perhaps it’s because I know how pleasant it is to touch type on a “proper” keyboard, to the extent that the keyboard can almost disappear when you get into a groove.

Friction is also the reason why I spent some time choosing the “right” blogging engine and writing scripts to make writing and publishing posts “easier” (for my definition of easier). In a sense this is part of my personality; I try to automate or polish away any friction because I find it annoying, and I want things to be simple and regular in the day to day. This is how I ended up with a job in IT: it was part wanting to know how computers and networks work, part being lazy and having the impulse to automate friction away. It turns out these are assets, and enough to land you a job. But it can also become yet another form of procrastination. In programming lingo, yak shaving.

2019-01-26

TODO for tomorrow:

  • Study Machine Learning for four hours — it’s quite a bit but I need it to attend a workshop next week.
  • Put one Pomodoro of work into a short story idea I came up with today.

It’s hard to write — I’ve been meaning to write here for close to ten days but I never seem to get round to it. I travelled, I then had lots of work, I felt tired. Of course these are excuses to a point, but I hold no grudges against myself, I guess; it’s just kind of hard. Then again, I do find the time to brush my teeth and take showers, so perhaps I should find the time to write for just five minutes or so a day. It’s just a matter of trying harder.

Now I’ve had the idea of titling my posts after the date on which I write them, and keeping an actual sort of diary here. I know, it’s not a new idea at all, that’s essentially what a blog is, but up to now I had thought of this blog as more of a collection of ideas and random occurrences than an actual diary in this structured sense.

Minsky’s Society of Mind

I just started reading Marvin Minsky’s “Society of Mind”. It’s a classic in the field of AI, from what I’ve read about it. I’ll try to post notes and potentially commentary about it on this page.

One thing I like so far is that it does not shy away from the most philosophical or profound matters, and sometimes mentions the Big Questions — although it also makes it clear that answering those is not in scope for the work, and may even be impossible.

The agents of the mind

Minsky characterizes the mind as essentially a distributed system made of simple parts that run simple programs — agents. Agents can be subordinate to others, which yields a tree (he implies there’s some further interaction that could turn this into a graph, but doesn’t go into much detail for now). I liked how he pointed out that every agent, and in particular the base ones, should be simple; if one ends up with an agent that seems “smart”, one has only succeeded in hiding consciousness/intelligence in yet another black box.

Easy things are hard

This part is classic; I’ve gotten the same idea from elsewhere (perhaps Wikipedia, perhaps some talk), so it’s become sort of canon (or a cliché?). The core idea is that one tends to underestimate the complexity of things that are relatively intuitive, like movement, and overestimate others, like analytical thinking. Many intuitive behaviours are actually the crystallized form of a very complex behaviour that one acquired (in many cases painfully) during childhood. This also goes for basic reasoning that one now thinks of as “common sense”.

Conflict and bureaucracy

He points out further that the agent tree is shaped in many ways like a human bureaucracy — say, a company. The implied hierarchy of agents plays a part in resolving conflict. He also makes a point of noting that the reader is probably part of such a bureaucracy, and is thus another sort of agent:

“Which sorts of thoughts concern you the most — the orders you are made to take or those you’re being forced to give?”

Memory

Memory is the base for cooperation between agents; when agents “on the same level” collaborate on something (which requires a sort of graph, as mentioned above), memory comes into play. It basically seems to store parameters for the agents that have to run.

German

German is harder than I thought it would be. Honestly, I think at some level I thought I’d just pick it up without having to study it properly — that was the case with English, and I reasoned (foolishly, I know now) that German couldn’t be much harder now that I knew English and had it as a base.

Of course, I learnt English when I was younger. And I did study it properly (academy and all) at some point, although I knew reasonable English before then (or so I believe). And German is more complex — it just is, with its declensions and word chains and what seem like many (too many?) grammar rules.

It could be simpler, that’s for sure. It feels as if its historical speakers just weren’t that interested in simplification. Latin speakers were — romance languages are simpler than Latin. Declensions were dropped by late Latin speakers and replaced with a relatively easy-to-grasp (to me anyway, and of course I’m biased, but to some linguists surely as well) set of prepositions. Alemannic speakers, on the other hand, were just a bit too earnest and decided to stick with the old ways. Now I have to deal with it. Eventually.