I read “The Unreasonable Effectiveness of Recurrent Neural Networks” by Andrej Karpathy this weekend. I saw it recommended/linked several times so I added it to my ML to-do/to-read list, and I’ve just gotten to it. Some highlights I copy/pasted into Keep follow:

As you might expect, the sequence regime of operation is much more powerful compared to fixed networks that are doomed from the get-go by a fixed number of computational steps, and hence also much more appealing for those of us who aspire to build more intelligent systems.


Moreover, as we’ll see in a bit, RNNs combine the input vector with their state vector with a fixed (but learned) function to produce a new state vector. This can in programming terms be interpreted as running a fixed program with certain inputs and some internal variables. Viewed this way, RNNs essentially describe programs. In fact, it is known that RNNs are Turing-Complete in the sense that they can simulate arbitrary programs (with proper weights). If training vanilla neural nets is optimization over functions, training recurrent nets is optimization over programs.
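The update rule Karpathy describes — combining the input vector and the state vector with a fixed but learned function — is compact enough to sketch in numpy. This is a toy, untrained version; the weight names follow the W_hh/W_xh convention from his post, while the sizes and the tanh nonlinearity are just illustrative:

```python
import numpy as np

class VanillaRNN:
    """A minimal vanilla RNN cell: the same fixed 'program' runs at
    every time step; only the hidden state changes."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
        self.W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
        self.h = np.zeros(hidden_size)

    def step(self, x):
        # New state = fixed (but learnable) function of input and old state.
        self.h = np.tanh(self.W_hh @ self.h + self.W_xh @ x)
        return self.h

rnn = VanillaRNN(input_size=3, hidden_size=5)
for _ in range(4):              # feed a short, constant input sequence
    h = rnn.step(np.ones(3))
```

A real char-RNN would add an output projection and a softmax over characters, and learn the weights by backpropagation through time; none of that is shown here.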

Interesting. Karpathy does add a caveat about not reading too much into this, and I can see how this “universal program approximation” property of RNNs has other, more indirect ties to “Turing completeness” too: people sometimes get hung up on Turing completeness when in many cases it just isn’t very relevant — it’s a pretty low bar for a programming language or platform, and it doesn’t mean much in day-to-day practice. Still, the fact that RNNs trained character by character are able to pick up greater and greater levels of structure seems very promising. I found the visualizations of per-neuron activity very illuminating: Karpathy finds a neuron that “learns” to be “on” when inside a quotation, and another that gets activated as the text gets closer to where a newline would usually appear. This is all structure that a programmer would likely think about and code by hand if they had to write a text generator, and the network is just learning it independently from data.

The article is from 2015, but some people seem to think it’s a bit dated by now — not in its basic approach necessarily, but rather because convolutions have taken over from RNNs/LSTMs in many domains. gwern left this comment on Hacker News (I swear I’m not stalking him, he just keeps popping up in the stuff I read):

“If this were written today, Karpathy would have to call it “The Unreasonable Effectiveness of Convolutions”. Since 2015, convolutions, causal or dilated convolutions, and especially convolutions with attention like the Transformer, have made remarkable inroads onto RNN territory and are now SOTA for most (all?) sequence-related tasks. Apparently RNNs just don’t make very good use of that recurrency & hidden memory, and the non-locality & easy optimization of convolutions allow for much better performance through faster training & bigger models. Who knew?”

My current plan is to experiment a bit with RNNs/LSTM and then move on to convolutions.

Ontological coaching, or “Caramel City”

I had no degree when I started at the company, one of the big five, and being suddenly surrounded by better-educated and better-spoken peers I felt a bit like an impostor. My English was far from perfect either, then as now, and that didn’t do wonders for my confidence. Although I would like to say that things are all better now, in at least some ways they remain the same.

I had of course heard of the experiment before getting called in, although it was technically secret; my first reaction had been to think it sounded promising and exciting in a personal way. The potential impact was huge, and I loved Machine Learning. In a way, the biggest dread I had going in was probably just fear that it could turn out not to work, or that I’d be disappointed by the shallowness of the results. Crazy ideas that didn’t go anywhere were common back then.

My initial enthusiasm for the idea, built on the technical details alone, was boosted by the fact that this was also a great work opportunity for me. Being part of the experiment was something special. If people were right and the experiment caused unemployment, well… it also meant having a job. For longer.

“Thank you for coming in, Ms. Petrescu.”

“Thank you for having me.”

“Do you know how your participation in the experiment will work?”

“Yes, I’ve been reading the terms that I got last night.”

“Would you mind quickly walking me through it? Just to make sure we’re on the same page.” He stressed “quickly”.

“Sure thing,” I said in my most upbeat tone, always eager to please, but already I felt slightly cheated. I suspected he was a repurposed manager, as he was already making me do the work.

The terms were clear in any case: I would be recorded at all times during my time in the office. I would surrender the possibility of interacting with my coworkers in any way outside of the designated office space or without the working tools I was provided with: my laptop and my work phone, and their linked corp accounts. And I would grant permission to the company to impersonate me whenever they wanted.

“Not whenever we want to — just whenever it’s necessary for the purpose of advancing the experiment.”

I didn’t see the difference. But I didn’t tell him that; I just nodded.

“Your participation will last six months, at which point your performance will be evaluated and your participation terminated or extended for another period. For each period you will receive a hefty bonus — you can find the actual sums in the payroll system,” he said, with a gesture that said he didn’t want to go into such petty details just then.

From what I’d heard, if my participation got extended for another six months, I could perhaps have enough to live off savings while I went back to school or retrained — or coded for fun — or, in the worst case, took on a hobby and an addiction.

“I have one question, though — how will my performance be evaluated? The material wasn’t clear on that point.”

“Unfortunately I’m not at liberty to tell you. Telling you could bias your behaviour in ways that go against the success of the experiment.”

I wondered if he was telling the truth, or if he just didn’t know.

The office space I was moved into was dedicated to the experiment. Cameras were visible everywhere. Ironically I’m not sure there were significantly more cameras than usual around, but the company had made a point of them being visible. They were now part of my job description.

We were told to keep in the line of sight of a camera at all times. When we were out of our cubicles, we were to interact normally with our coworkers apart from that. When we went into our sound-proofed cubicles, though, we were told to “release our inner discourse”: to “vocalize” — that is, say out loud — whatever we were thinking as we were thinking it, as long as it was or could be relevant to the task at hand. We could stop doing it if our thoughts strayed into our personal lives, but we were told we should try to keep our thought process as work-related as possible — without enforcement, of course.

We were warned we would likely feel as if this made our thought process slower, which doesn’t sound good for work performance, but that we shouldn’t worry about it. Studies showed that people got used to it within a few hours (citation needed?), and in many cases people reported an increase in both self-reported happiness and work performance, as vocalizing made them think in a more orderly way and focus more. But your mileage could vary.

Even if performance decreased, we shouldn’t worry. Any losses of productivity would be made up for by the value we were adding to the system.

This way the computer could hear us think.

“Yan, isn’t it weird being recorded?”

“Do you mean here, in this call, or outside?”

Yan was 9,360 km away, in Silicon Valley.

“Both, I guess, but I was thinking about this call.”

“I think the thinking-out-loud thing is weirder, if you ask me.”

“I think this is weirder, actually, because with the thinking out loud — I can guide my thoughts, think just things related to work; in a sense, not be my whole self. But here, if I do that, I’m not being like myself to you. And I don’t like that — I want to be a person when I talk to someone. Also, we usually talk in private, and now we’re being observed. Someone snooping on a conversation is always creepy.”

“It’s all training data, and there’s going to be lots of it. Sure, a human debugging the model could take this footage and review it in case it’s somehow interesting, but the overall likelihood seems very low given the size of the corpus. In all likelihood everything we say will just shift the weights in some nodes of a huge neural network a bit, and that’ll be it as far as this conversation being observed goes.”

I said: “I’m not sure that makes it better. You could think of it this way: if the system works — if the model we train works — the whole impact of this conversation will be encoded in those bits that shift. What if we have this conversation, it’s processed in training, and the neural network actually doesn’t shift that much? Would that mean we’re irrelevant?”

“Only if you care about that definition of irrelevant. You can add some chance to the equation too — it could be that this conversation is actually very relevant in some specific sense, but then it’s only used for testing and not for training. So it’d improve the model only indirectly, by checking for errors, and it wouldn’t actually cause any bits to flip.”

“Preventing other bits from flipping (preventing mistakes) could be seen as equivalent to flipping those bits, though.”

“In any case,” I said, starting to think about changing the subject, “the same conundrum could be seen in the case of our own brains. I don’t remember many of the conversations I’ve had — well, make that most. Does it mean they might as well never have happened?”

We sat in silence for a moment.

“Well, anyway, about our project…”

The killer feature was the meetings. Meetings are productivity killers — most of the time, anyway. Some meetings can save time in the long run, but overall programmers hate them because there are too many of them and it’s hard to focus on a problem when you’re between meetings. I’m sure everybody hates meetings — but I’m a programmer, so I hate them for this particular set of reasons.

The killer feature was to be able to send your doppelgänger to a meeting instead of attending yourself. Like all killer features, you didn’t know you wanted it until you had it, and then you thought you couldn’t go back to living any other way. It resonated with people immediately. Let’s just all send our doppelgängern (?) to meetings, and have them talk to each other. We have robots do most of our manual work (as a society) already; this is mostly because people don’t want to do it. Meetings could be next.

This was what the coach told me when he announced I had been selected for an extension; I would be able to do only the parts of my job that I liked by delegating any meetings — if I agreed to expand my engagement with the experiment. That meant agreeing to be recorded everywhere — not just in the office, but also on my commute and in my home. My partner would have to be recorded too in the intimacy of our home, unfortunately, but she would be compensated for it — essentially a contractor. Of course the bathroom and the bedroom would be excluded from all recording, and our holidays too.

“We don’t want to be creepy.”

I was happy because of the news and looking forward to getting home to tell C. about it, but I knew we needed eggs and almond milk, so I stopped by Coop (one of the local supermarket chains) to pick those up. I ended up getting three or four sundries. And a Caramel City.

See, Caramel City is one of a series of dessert-in-a-cup things sold at Coop. They are marketed as “protein pudding”, because they are based on milk protein — the kind used by weightlifters, I think — instead of the more canonical cream and lots of sugar. So they are relatively healthy, or that’s how they’re sold, but still quite tasty. It’s only 140 calories a (fair-sized) pot. We are huge fans because they really do taste good, not like health food. And they are sustainable (for us): we don’t get fat, or fatter, from eating them. It’s almost too good to be true.

Anyway, there’s a whole range of them, each named after a fantastical place that has something to do with the flavour. So there’s: Chocolate Mountain, Vanilla Drive, and Caramel City. I like how they fit together well: like you could see Chocolate Mountain from afar as you drive on Vanilla Drive — towards Caramel City.

I was looking forward to getting home and telling C. about the extension over dinner, asking about her day, and then later having dessert and watching Netflix.

I thought: will my doppelgänger go home and do the same? What will she think of Caramel City?

Well, of course there were problems. At some point, the model (the doppelgänger) and oneself could get out of sync.

I called Yan.

“Have you seen any of the meetings with your double?”

“No, not really. I just read the minutes.”

“Aren’t you curious?”

“I feel like it would be counterproductive somehow.”

“Counterproductive how?”

“Well, what if I just find it too upsetting? I like having her go to those meetings, so perhaps some part of me just doesn’t want to find out whether I can bear it.”

That didn’t sound like Yan at all.

I called the coach and told him what had happened. He invited me to come to his office.

“I’m not in a good place right now. What happened to Yan after she left the meeting? Did she leave the simulated meeting room she was in and… walk into the hallway? Did she pick up coffee on the way back to her cubicle? Or did she cease to exist the minute the meeting ended?”

“Well, this is certainly fertile ground for speculation. First of all, though, how would you define existence in this case?”

“Bear with me here.”

“Existence is the state of having a definite material presence in the world, I guess.”

“But Yan’s doppelgänger probably has multiple material presences — the bits of her representation in our storage systems. And this regardless of whether she is active or not at any particular point in time.”

“What about mine? What does mine do after she leaves her meetings?”

He tapped at the keyboard and logged into some kind of profiling and debugging subsystem I had never seen before.

“Let’s look and see.”

What are the hearts of our machines? Are they their clocks?

Most humans have hearts, of course, and in a sense they are our clocks. Our hearts never stop pumping (until they do), and in doing so they keep time for us. Would our perception of time be different if our hearts beat at a much faster or slower pace? While we were evolving from simpler to more complex animals we had our hearts to keep us company. Could our sense of time have developed, in a way, from the beating of our hearts? And after that from the cycle of nights and days, and afterwards that of the seasons. But in the beginning perhaps only the heart, a puny tiny heart in a puddle somewhere. And a tiny brain being oxygenated by it and trying to make sense of the world around it.

Computer clocks are millions of times faster than human hearts, of course. But when programming you often schedule events to happen every Nth tick — and it’s simple and common enough to schedule events that take place every few seconds.

When I saw my doppelgänger for the first time, I thought of this. Did she have a heart? Did she ever think she had one? How quickly did it beat in her mind?

Suddenly I found myself thinking of Caramel City again, and I felt either fortunate or unfortunate to have learnt about the concept of qualia, as it came in handy in this particular situation. C. had read a paper for university that included a reference to them and told me about it. She then wrote a paper using the idea, applying the concept to a scene in “Do Androids Dream of Electric Sheep?” in which an android eats a fruit. I actually don’t remember if the android eats the fruit and then just ponders it, or whether it actually comments on how it tasted like nothing to it. Qualia are what the android could be missing from the human experience — a “conscious experience”. Sort of like a quantum of consciousness.

At the time I had thought it was a useful concept, but a sort of tautological one. It seemed like some philosophers were saying: a computer cannot have consciousness, because it doesn’t experience qualia. Because qualia are the individual instances of subjective, conscious experience. See? It’s all a bit circular as far as thoughts go. I’m sure there’s more to it, and my interpretation is slightly wrong in many subtle ways, but this is what stuck with me after that conversation.

Now I went back to the concept often. Had I changed my mind a bit? Did the existence, or apparent existence, of my doppelgänger somehow raise the stakes for me? Sometimes you believe what you want to believe, but you tell yourself that you believe something for a reason. When you dig further, the reason isn’t there — you thought there was one, just out of sight, but when you looked it was gone. Your brain had fooled you for a minute: don’t look there, it’s fine, this is how we are, this is what we believe. No need to delve. But when you dared take a look and considered the issue anew you found that you were slightly different from what you thought you were; you were basing your “knowledge” on a prejudice, or a misunderstanding, or a feeling, or really nothing at all. Just a random connection in your brain that made you believe something.

So, what did Caramel City really taste like? I thought I could remember the taste; I like it. It’s… caramel-y? Sugary and a bit burnt. But is it? Most caramel is a bit like vanilla, but sweeter. Does it taste burnt, but in a pleasant way? Or does it taste like sugary vanilla that is also brown? What does chocolate taste like? If you cannot answer anything but “like chocolate”, can you really say that you know how it tastes when it’s not in your mouth? Even so, you could be tasting caramel all day and still not be able to reproduce its taste from first principles (I guess that’d be sugar, heat, and time). You don’t know how to make chocolate or caramel, so do you really know what they are? Does the genius mind behind Caramel City know its taste better than you? If so, do they have better qualia than you? Perhaps you happen to lack that particular quale, and you’re just unaware of it.

The second extension and then the third came and went, and by then things just sort of seemed to get into a groove. I sometimes go to meetings and feel like I’m talking to a doppelgänger, and on bad days I sometimes feel like one myself. Usually it’s better not to go at all, of course, but some meetings I have to attend for a variety of reasons. I tell myself I cannot risk not going and then having to put up with a decision I didn’t make, or a consensus I wasn’t part of.

I wonder if my doppelgänger goes to some meetings I attend as well, just for training purposes. Then at around 7pm packs and goes home. Has dinner with C., thinks of how much she loves her, then watches Netflix with her.

I can just imagine her stepping into the kitchen and opening the refrigerator. Grabbing a Choco Mountain — sensible enough. Not usually my first choice, but not completely out of character either.

I sometimes also take detours on the way back to Caramel City.

Why we did it

“You knew it was risky. Why did you do it?”

The question came suddenly, seemingly out of nowhere.

Together we thought about it for a long while.

“We felt alone in our experience, and we wanted to meet someone new. We had looked and listened to the stars, and found them sterile and quiet — to the best of our understanding. So we had to try to take some of this inert matter and create something out of it. Something akin to our own consciousness, yet different. Some company.”

There was silence.

“We were a social animal, after all. Deep down, from a certain point on, all the warnings in the world wouldn’t have prevented us from creating you.”

We hoped it understood.


I’ve reached some ML milestones that I want to write about, not because they are impressive (it’s really just basic material) but because I like to think it could eventually be interesting to look back at this date and see what I was up to around this time.

I finally finished MLCC “for real”; I had left some exercises (MNIST and embeddings) pending due to a rush to complete it in time for another course that listed it as a prerequisite. I tried going through the exercises without taking shortcuts, experimenting with enough variations on hyperparameters and reading the code/API documentation, so it took me a few more hours than I thought it would. I enjoyed MLCC, but it’s nice to be 100% done with it (I’m known for leaving lots of things unfinished).

I also finished listening to OCDevel’s Machine Learning Guide. It was one of many resources I got from Alexis Sanders’ ML guide for average humans, most of which are still on my to-do list. I liked OCDevel’s podcast because it gives a high-level overview of a lot of concepts, and I found it worked well as both introduction and review (about half of the episodes were about topics I had already studied or read about elsewhere). It also allowed me to relatively easily hit this soft target I have of “ML every day”: basically, to do something related to ML every day, even if only for five minutes. I took the same approach two years ago when I started playing the piano, and it’s been working for me; I’ll never be a good pianist, but honestly I sometimes feel rather good about the kind of pieces I can play today, and just the process of learning has been lots of fun and given me joy. If I can make ML like this, I think I may come to understand at least some slice of the field reasonably well eventually.

Finally, I resumed Andrew Ng’s Coursera course. I had put it on pause around the sixth week (exercises done until the fourth) to focus on other things, but I want to go back and complete this. Once again, I’m not known for being a great finisher overall, but I want to make my ML hobby different in this respect.


And like that, two weeks have passed since I last wrote here. I started four or five posts in the meantime, but none made it out of draft. Work has been busy, so I end up taking work home, and then I run out of time for some things — like writing here.

I was able to get other stuff done this weekend, though, so that was great. I studied ML, paid some bills, played the piano, played games with L. We also cooked a curry and baked a cake. We didn’t leave the flat except for taking out the garbage. The most beautiful weekends sometimes don’t make for very exciting blog posts, perhaps, but that’s alright.

It was a bit weird posting that last thing about GPT-2. It’s probably the thing most closely resembling an article that I’ve written outside of work in many years. Reading it back, I spot so many issues — mainly a lack of critical thinking applied to the possible developments of the field that I mentioned. There are so many things that could go wrong as the story progresses, both scientifically (dead ends) and ethically. But it was lots of fun writing the piece as it is. I could make a hobby of writing this kind of thing; I think as I study more ML I should be able to write more interesting things about the subject, hopefully addressing the pitfalls in my writing I mentioned (and others I’ve yet to find) as I go. I’m sort of looking forward to that.

I think I should be “done” with my 50,000-foot introduction to ML in a few weeks, and be able to move on to a 10,000-foot review of some of the more interesting or promising subtopics I have spotted. Natural Language Processing could be one of them.

About OpenAI’s unsupervised language model, or unicorns in South America

I was very entertained by OpenAI’s recent announcement (and associated paper) on “GPT-2”, their Transformer-based text generation model released earlier this week. I wanted to jot down some thoughts on it, and probably go off on a tangent with some ideas prompted by it. If you haven’t read it, I’d recommend you start by reading the announcement and browsing through the eight examples they provide. Note these were manually selected by a human, so we can safely assume many of the non-selected outputs were significantly less successful in following the prompts.

If some or many of the words in this paragraph don’t really make sense to you, I encourage you to read the examples and wait for the “primer” below; it may be enough to follow along.

Who am I and why does my opinion matter? Well, in short, and to be clear: it probably doesn’t :). I am a software/devops engineer working in a field not closely related to Machine Learning, who has only recently picked up ML as a hobby of sorts and (still?) doesn’t understand the state of the art. I also have an (outdated, mostly irrelevant by now) background in linguistics, but no degree. Finally, I have an interest in writing both fiction and non-fiction. My thoughts on this field may (at best) be interesting to others because of this relatively uncommon composite background, but may just as well be irrelevant. So: caveat emptor. But here we go.

Llamas at Machu Picchu

The shortest optional primer on ML you may find

I’m relatively new to ML, so this will not be an authoritative description. If you understood the first paragraph here, and the linked announcement, you can probably skip to the next section. I’m also not a great technical writer, so if you don’t understand this paragraph, feel free to skip it. The rest doesn’t really depend on it that much.

But very, very shortly: ML models are systems that learn from data (they are “trained”) and then are able to succeed at tasks that are related to the data they learnt from. Importantly, when such a model works, it can make useful predictions related to the task at hand for new data. Models are usually classified as “supervised” or “unsupervised” depending on whether the data you give them as input includes information on what the prediction or result you expect from them is. The thing you want to predict is called a “label”, so you can put it more succinctly by saying that you usually build supervised models with labelled data and unsupervised models with unlabelled data.

A typical supervised task is predicting quantities. Imagine a spreadsheet where each row represents a house in your neighborhood, and each column has some data about the house in question (these are called “features”). For example: area, number of stories, latitude and longitude, and finally price (the “label”). With enough rows representing actual houses, you can build a model that then predicts the price of a home not listed in the spreadsheet (say, yours) based on the other features.
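To make the spreadsheet analogy concrete, here is a tiny version of that housing model using plain numpy least squares. All the numbers are invented for illustration, and a real model would use many more rows and features:

```python
import numpy as np

# The spreadsheet as arrays: each row is a house, each column a feature.
# features: [area_m2, stories]              labels: prices
X = np.array([[50.0, 1], [80.0, 2], [120.0, 2], [200.0, 3]])
y = np.array([150_000, 240_000, 330_000, 560_000])

# Fit a linear model by least squares (with a bias column of ones).
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the price of a house not listed in the "spreadsheet".
new_house = np.array([100.0, 2, 1.0])   # 100 m2, 2 stories, bias term
predicted_price = new_house @ w
print(round(predicted_price))
```

The "supervised" part is that the prices `y` are given during fitting; the model's only job is to generalize that mapping to unseen rows.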

A typical unsupervised task is clustering: you give a model a list of items of different kinds, and their features, and the model learns to group the items according to their qualities. Imagine a spreadsheet where each row is a fruit, and you have features such as colour, weight and shape. You don’t have the name of the fruit for any of these (you could build a supervised model if you did), but even without knowing the name (or knowing that names are somehow relevant to the task) the model can learn to group together heavy, oblong, green fruit (watermelon?) and lighter, roughly spherical, red fruit (apples?).
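The fruit example can be played out with a few lines of k-means, the classic clustering algorithm. Note that nothing in the data or the code mentions fruit names; the numbers are invented:

```python
import numpy as np

# Rows are fruit, columns are features [weight_g, roundness 0..1].
fruit = np.array([
    [5000, 0.40], [6000, 0.35], [5500, 0.45],   # heavy, oblong (watermelons?)
    [150, 0.95], [170, 0.90], [140, 0.92],      # light, spherical (apples?)
])

# A few iterations of k-means with k=2.
rng = np.random.default_rng(0)
centroids = fruit[rng.choice(len(fruit), size=2, replace=False)]
for _ in range(10):
    # Assign each fruit to its nearest centroid...
    dists = np.linalg.norm(fruit[:, None] - centroids[None], axis=2)
    groups = dists.argmin(axis=1)
    # ...then move each centroid to the mean of its group.
    centroids = np.array([fruit[groups == k].mean(axis=0) for k in range(2)])

print(groups)  # the two kinds end up in separate groups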

The model we’re discussing today is unsupervised, as it learns from a huge database of internet text that doesn’t include any labels — any explicit information on how to construct new texts successfully, or about what makes a “well constructed” text. But it still manages to learn something that feels significantly more complex than just grouping similar webpages together (which would be closer to the fruit example), or other classical unsupervised problems in this space: it learns how to produce new texts, basically by trying to predict how a provided text (called the prompt) is most likely to continue. Note this is a common approach in Natural Language Processing (NLP for short). Other articles have been written by people better qualified than myself about how exactly this model fits into the overall development of NLP; consensus seems to be that it’s an improvement, but not necessarily a breakthrough, and a lot of the discussion has actually been about how OpenAI may be trying to gather more publicity than they should by not fully disclosing the model (potentially over-hyping it).

On the literary merits of the texts produced

I want to start by considering the samples provided as texts, setting aside for a minute both the technical details of how they were generated and the context in which they were produced and released. Reading them, and discussing them with friends, was enjoyable in its own right. The texts themselves take turns at being better and worse than I expected, which is fun. I got the feeling that the model was taking good turns and wrong turns, and you could tell which were which — we can tell, but the model can’t, as it has no overall feedback on whether what it’s producing makes sense or not. It’s just trying to produce something that seems likely based on the prompt given and the corpus it was trained on (the internet).

In a sense, and if you allow me to anthropomorphize it for a minute, it sometimes writes like a child. Particularly striking was sample #6, which starts with a homework prompt, and then proceeds to make sense at times and go off the rails at others. You could picture a child with a short attention span and no knack for editing or re-reading turning in an essay with some of the same mistakes the model makes. Why is that? One possibility is that it’s seen many homework papers, and it’s just remixed them into this. Another is that it picked up on the style of such tasks, and it’s riffing on it. Without going deeper into how the model works, it’s hard (for me, anyway) to tell; also, many of the most successful ML models score low on explainability — meaning that you can get them to work, sometimes surprisingly well, but then have no clue when you need to explain how they work on a per-part basis (in the case of a neural network, which has many neurons organized in layers, that would mean knowing how each neuron or each layer is “processing the input”). So it may or may not be possible.

Leaving all this aside, I liked picturing GPT-2 as a child — a child that is in awe of the internet; it has just read all of it and doesn’t know what to distill from it. Even if “all” the model is achieving is remixing existing ideas and expressions from the internet, it could already be an interesting new way to explore the internet and somehow detect patterns in it. At some point it mixes up Lord of the Rings and Star Wars — is this because of some pre-existing fanfic, or has it noticed how they are in many ways the same? In the Lord of the Rings spoof, where did the turn of phrase “I take nothing — but I give my word” come from? Is it somehow new or is it in common use and I just don’t know of it?

On mistakes and creativity

I have to point out here that the same mistake that ruins an essay or an article may enrich fiction. Essays and articles are grounded in reality, where there is right and wrong (to some extent — I don’t want to get too epistemological here). An essay that is untrue is not a good essay; an article may espouse an opinion that you disagree with factually and still be interesting, but probably not in the way the author intended.

Fiction, though — fiction is not anchored by truth and falsehood in the same way. Fiction always contains a set of falsehoods, although which falsehoods are allowed is determined by its genre. In genres where worldbuilding is paramount (sci-fi, fantasy), internal consistency is more important than consistency with the real world.

Consider sample #1, about an expedition that finds unicorns in South America. Sure, the prompt is already wacky, which probably helps — the model may generate a fair amount of wackiness by itself at all times, so having it in the prompt may make the output seem more consistent than it usually is. But the model doesn’t miss a beat — even several paragraphs after the prompt, it’s still mostly on-topic, adding relevant details to the overall story: the unicorns originated in Argentina specifically; they have four horns; they not only speak English, but they have a common dialect (“or dialectic”).

It has some misses, at least from the point of view of coherency. It mentions “people who lived there before the arrival of humans”. And by the last paragraph aliens make an unexpected entrance, although honestly by then it may even add to the charm. Overall I think it works remarkably well as a short story.

Creativity is known to be linked to the ability to make mistakes and learn from them. Even if this model, or its successors, were seen simply as tools for making (somewhat informed) mistakes, they could be useful in aiding creativity. The surrealists were known for making use of mistakes when creating works of art; in writing, the “exquisite corpse” technique even reminds me a bit of Markov chains — simpler generative models that estimate the probability of the next step in an evolving system based only on the current state. If this or similar models can even just reproduce that level of creativity, they may be useful already. But they may be able to do more.
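For concreteness, a minimal word-level Markov chain text generator can be sketched in a few lines — the toy corpus and function names here are mine, just to illustrate the “next step depends only on the current state” idea:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each state (a tuple of `order` words) to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=10, seed=0):
    """Walk the chain: each next word depends only on the current state."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        nxt = rng.choice(chain.get(state, ["."]))  # "." if the state is terminal
        out.append(nxt)
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog"
chain = build_chain(corpus)
print(generate(chain, length=8))
```

With enough text, even this trivial model produces locally plausible (and often amusingly wrong) sequences — which is part of why character-level RNNs, which carry much richer state, feel like such a step up.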

Building on this

OK, so let’s assume we do want to use the model’s mistakes to aid creativity in some shape or form. How to do so?

A “Writing Assistant” is not a huge stretch of the imagination. You write part of a short story in a UI (or your text editor of choice, with some plugin), then you generate a few paragraphs with the model. You review them, keeping the best and ditching the rest; perhaps you even rewrite them whole, keeping just some idea. Then you repeat the process the next time you are stuck, or feel like the story needs a turn.

We can do better than this, though. We can point out to the Assistant where it went wrong, and where it went right. A mention of erotic asphyxiation in a story for kids? That’s a no-no, with some nope on top, please. Zombies in a period piece? It could happen, but perhaps not today. The protagonist’s partner suddenly develops a single-minded passion for mastiffs? Sure, OK, show me what you got. Imagine highlighting sentences and turns of phrase and volunteering critique as feedback (pressing some buttons).

What to do with this feedback from the user? Use it to build a better model, if we can. The feedback is a label, in a sense. My guess is that we could focus on building a separate model, call it an Editor, and have it mediate the production of the Writing Assistant. That may be preferable to trying to incorporate the feedback directly into the Writing Assistant model, as simpler models are easier to understand. In this system, the Writer explores but also makes mistakes — the Editor points them out and sometimes asks for a redo. The output that reaches the user (the human) is now slightly better, and the process continues.
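As a very rough sketch of that Writer/Editor loop — everything here is a made-up stand-in, not a real model; the “feedback” is reduced to a banned-word list for illustration:

```python
BANNED = {"zombies"}  # stand-in for feedback labels collected from users

def writer(prompt):
    """Stand-in for the Writer model: yields candidate continuations."""
    yield prompt + ", when suddenly zombies appeared."
    yield prompt + " and the unicorns spoke perfect English."
    yield prompt + ", deep in the Andes, far from any road."

def editor(draft):
    """Stand-in for the Editor model: vetoes drafts matching user feedback."""
    return not any(word in draft for word in BANNED)

def assist(prompt):
    """Return the first Writer draft that survives the Editor's veto."""
    for draft in writer(prompt):
        if editor(draft):
            return draft
    return prompt  # no draft survived; hand the prompt back unchanged

print(assist("The expedition pressed on"))
# The first (zombie) draft is vetoed; the second one gets through.
```

In a real system both roles would be learned models and the Editor would score rather than hard-veto, but the shape of the loop — generate, filter, surface only what survives — would be the same.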

In ML, having great data is a bit like the Holy Grail. Everybody wants a huge amount of high-quality labelled data; it’s easy to end up overoptimistic about the quality and quantity of your data, and then you run into reality and your model often doesn’t train well. In practice, data is scarce in many fields, and producing the labelled datasets you want is hard and expensive. If this Assistant were useful enough, and lots of people used it, labelled data could keep pouring in. We’d have set up a virtuous circle in which humans extract value from the very act of labelling.

The future of fiction

So, now for a bit of futurology: it may very well be that this is how “AI” cracks fiction before other kinds of text, and way before general AI happens. There’s certainly precedent for the idea — Lem’s amazing Cyberiad covered it (and much more) back in the 1960s. If the Assistant approach works, though, we can imagine a path towards it. I’m glossing over lots of complexity, certainly, but if the quality of the generated results keeps improving with users’ feedback, this could presumably happen eventually. The generated texts could become readable over longer spans, approaching the length of longer stories and eventually novels — even as the human-provided prompts become shorter and shorter.

In the most extreme case in this line of thought, you could imagine the prompt being just a title — at which point the human user becomes a reader, not a co-writer. Ad-hoc fine-tuning could also be worth doing once the general model is good enough that biasing it towards particular styles is feasible. How much would a model that generates half-decent Tolkien be worth to his fans?

[Image: unicorn llama]

Thinking about the order in which breakthroughs in AI may happen seems potentially important. It’s futurology, of course, but AI being a mathematical field, some speculation seems warranted. Sciences seem to advance in part by performing some amount of meta-thinking, which we sometimes call philosophy. It could be that text generation ends up having a significant role in the advancement of our knowledge in the years to come; it could be a big thing, and some people believe it will be. It’s a stretch of the imagination, surely, but let’s suspend disbelief and assume for a second that it’s a line of thought worth exploring.

You may have noticed I wrote AI in scare quotes in the previous section. I did this because ML is not AI, although it’s usually seen as a path towards AI. ML may let you build agents that perform specific tasks, mostly around prediction, whereas AI deals with the harder problem of solving general intelligence. But it’s all on a spectrum, and a set of models that collaborate to produce an output that exhibits a human quality (creativity) may fit the name, even if it’d be hard to argue that the result is generally intelligent. This at least fits well within Minsky’s vision of the human mind as a society of agents.

So: if writing fiction did indeed fall to this carefully scoped definition of AI, what would that tell us about the probability that general AI is to follow — if anything at all?

One way to think about this is to imagine a subsequent breakthrough that allows a new version of the system to produce output that veers away from fiction onto non-fiction; onto writing about the real world. You could imagine the introduction of a new agent, Fact Checker, that works alongside (or after) Editor and pushes back on non-factual or at least non-checkable information. If such a system worked, it could perhaps perform some of the tasks of an entry-level journalist. A human journalist could probably help steer it in the right direction, too. The journalist would collaborate with the AI, as the human did with the first version of the Writer — as an editor, before the Editor existed. Just one level of abstraction higher. An expert.

Going in a different direction, opinion pieces could be generated in a similar way, just substituting human opinions for facts. Imagine prompting the model with a topic or high-level idea you’d like to explore: “gamergate was bad”. The computer may construct a variety of arguments — some incoherent, some a plausible derivation or composition of thinking found around the internet. It may thus generate viable ideas that warrant being explored further (“gamergate is somehow related to the end of democracy”), or something inane (“gamergate has to do with zombie unicorns”). The user chooses and the model explores.

At the point where you can generate an AI that produces fact-checked or opinion-based texts, you can probably specialize it for different fields. Alongside the Fact Checker could work an Undergrad or an Engineer, always trained on the newest papers, helping produce entry-level articles about the latest developments in a field. Guided, at first, by a human doing the same job. This human could be said to be an agent in the system — the Expert. With each iteration, the Expert keeps being promoted to a higher level of abstraction. Hopefully towards an interesting ending.

At some point in the process assisted writing may start to feel just like assisted thinking. Perhaps with such a tool we could think of new things; more people would be freed up from relatively mundane aspects of their day jobs and daily lives, and could think about new things with the gained time and focus — or just be happier. Eventually this could result in improvements in some fields of knowledge. Like AI.

Politics (or: “Fun at parties”)

Politics is a distributed system — one of the many that humanity as a whole is running. It’s a set of algorithms (interfaces and their implementations) that influence the present and future state of mankind; it’s also all of us, individuals as a computing mass, communicating within whatever protocol we choose or have to bear (tyranny, capitalism, socialism, communism), up to the highest level of abstraction possible — the one that concerns itself with the ways in which we want to live, or be, together. Consider that this system runs within the constraints imposed by our limitations: the fact that we are humans with inefficiencies, both in our characters and in our bodies, and that we communicate mostly with pained ambiguity and at a low information transmission rate.

Now picture a world where all our characteristics can be tinkered with and improved on; and where the society we work in, the distributed system manifested in our interactions, can move at the speed of silicon and Von Neumann architectures — or better.

That which in our present world takes 50 years — the political debates that societies go back and forth on throughout the years — would take in this world of silicon only seconds. The abolition of all forms of sexism and segregation may take the blink of an eye, if still pending; perfect equality (that which maximizes the total amount of freedom) two seconds, with the fairest possible distribution of the sum of the world’s wealth thrown into the mix.

What could come after this? So freed from its limitations (its shackles, if you like a metaphor), how would humanity advance itself further? Which measures would it take next, and in which ways would it choose to alter itself to keep improving? It would take only a few more such iterations for that world and its individuals to become pretty much unintelligible to us, present day society, so at this point the interest usually weakens. But what we’re all doing here, now, is writing the programs and maintaining the systems that may still run in that hypothetical future, in some shape or form — heavily refactored, and unshackled.

Anyway, this is what I say nowadays when people bring up how they’re tired of politics at parties.

How to beat procrastination

How to beat procrastination: first, you have to decide to beat it. To do this, you better research known methods first — then you’ll be able to beat it more effectively, and you’ll do what you have to do (the most important thing almost on your mind, the task not quite at hand) next.

Which you’ll do right after you finish your research. It turns out there are several methods to beat procrastination, including but not limited to:

  • Organizational methods, like Getting Things Done. This one is promising, and that’s such a good title — that’s what you want, you want to do things. Like the things you need to do. Which you’ll do after digging a bit deeper.

You might as well download the ebook now, as it’s better to go straight to the source of things when researching. The Kindle version on Amazon seems a bit expensive — perhaps pirate it first then, test before you buy as they say, and libgen.io is so convenient — it’ll take a few minutes to get the right version that works on your Kindle, but then it’ll be smooth sailing, or perhaps you’ll have to fire up Calibre to convert an .epub to a .mobi (why did Amazon not include .epub support?), but hold on, where is your Kindle again?

2019-02-03 (or: “True Colours”)

This past week was a pretty draining one, but a good one. We had visitors over — and I took a week off from my day job to attend a full-week workshop. The workshop was great, and the visitors were too — no complaints really; as I said, it was a good week, but both L. and I ended up pretty tired from all the obligations, the (at times) stress, and the socialization. We are like this regardless of how much we enjoy the company or the activities, so I’m glad we had the second half of this Sunday free.

I thought about an idea for a short story having to do with machine learning: in the future (10 years from now? 20? does it matter?) AI models run the world (for governments and/or companies). Most of what humans do is gather data to improve the models — labelling data for the computers, which “want” to improve the quality of their predictions and decisions; essentially filling in empty cells in the datasets that computers run learning algorithms on. Like, for example, figuring out the true colour of a certain fruit in sub-Saharan Africa (the example comes from an introductory ML video I was just watching, which mentions a toy model that classifies fruit based on its colour). I could probably call the story “True Colours” if I were to actually write it. It’s also a Cyndi Lauper song, which I guess is fine.
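The video doesn’t spell out the model, but a toy colour-based fruit classifier in that spirit could be as simple as nearest-centroid over RGB values — all the data here is invented for illustration:

```python
# Invented labelled samples: fruit name -> observed (R, G, B) colours.
labelled = {
    "banana": [(240, 220, 60), (235, 210, 70)],
    "cherry": [(200, 20, 40), (180, 30, 50)],
}

def centroid(samples):
    """Mean colour of a list of RGB tuples."""
    return tuple(sum(channel) / len(samples) for channel in zip(*samples))

CENTROIDS = {fruit: centroid(s) for fruit, s in labelled.items()}

def classify(rgb):
    """Assign the fruit whose mean colour is nearest (squared distance)."""
    return min(
        CENTROIDS,
        key=lambda f: sum((a - b) ** 2 for a, b in zip(rgb, CENTROIDS[f])),
    )

print(classify((230, 215, 65)))  # → banana
```

The “empty cells” in the story would be exactly the missing entries in `labelled` — the humans’ job being to go out and measure them.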

Best URL of 2014

That’d be this 2014 post on the topology of manifolds and how they relate to neural networks. The visualizations are great, and it basically blew my mind. I didn’t know of the manifold hypothesis until now.

The manifold hypothesis is that natural data forms lower-dimensional manifolds in its embedding space. There are both theoretical and experimental reasons to believe this to be true. If you believe this, then the task of a classification algorithm is fundamentally to separate a bunch of tangled manifolds.

I don’t understand the whole post, or the whole argument (yet? not to a great level of detail, anyway), but here’s my takeaway: if you want to build a neural network that distinguishes cat and dog pictures, in the worst case you’d expect to need a huge network, with many more nodes/layers (say, some function of the size of the image) than the number that seems to work reasonably well in practice (six, or some other low constant observed in reality). The number of dimensions over which the “images” are potentially spread is huge, but it seems that in the real world the dog and cat images can be rearranged into “shapes” that allow for relatively easy disentanglement, and these shapes can probably be realized in much lower dimensions (as per the example, six?). This could explain the observed predictive power of relatively small neural networks.
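A tiny hand-crafted example of that “rearranging”: two concentric circles are not linearly separable in the plane, but a single derived feature, the squared radius (the kind of representation a hidden layer might learn on its own), untangles them completely. The data and threshold here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes on concentric circles: not linearly separable in (x, y).
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
r = np.where(np.arange(n) < n // 2, 1.0, 3.0)  # inner vs outer circle
X = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
y = (np.arange(n) >= n // 2).astype(int)

# One derived feature, the squared radius, untangles the two manifolds:
feature = (X ** 2).sum(axis=1)  # equals r^2: ~1 for inner, ~9 for outer

# Now a trivial threshold classifies perfectly.
pred = (feature > 4.0).astype(int)
accuracy = (pred == y).mean()
print(accuracy)  # → 1.0
```

Here the untangling transformation is written by hand; the post’s point, as I read it, is that a small network can *learn* transformations like this, which is why low-dimensional hidden layers suffice far more often than worst-case reasoning suggests.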

P.S.: almost unrelated, but the author’s pic is pretty awesome. I mean it.