Learning Curves

Understanding the properties of a medium is a process of repeatedly testing its bounds.

If you're seated in front of a large panel with fifty buttons, you push them and see what happens. You watch what each one does on its own, and you start to notice patterns. Sometimes complex and interesting effects emerge because two buttons interact in a non-obvious way. Any system is like this; you fuck around with it until you grok it. Every time you give it an input, it responds, and your mind makes a note of it, developing a more and more comprehensive model of its behavior. At some point, that understanding becomes like a reflex. When you're learning to shoot a basketball, your muscles learn the subtle control needed to give the ball just the right amount of thrust. The faster you can iterate on these changes and observations, the faster you learn the hidden properties.

Sure. Obviously! In life, there is a broad spectrum of activities, varying in how difficult or expensive they are to learn and master. But some of them don't need to be that hard; technology done right can make them easier.

What's harder: learning a video game or learning how to bake a cake?

Understanding how to play Pac-Man takes about 10 seconds. The first time you run into a ghost, you know they're bad for you. Meanwhile, baking a semi-edible cake takes at least an hour just to figure out something small, like the fact that you used too much milk. Reduce the milk, try again. Taste it. Oops, you used too many eggs and probably took it out of the oven too soon. Baking requires several factors to go right AND has a long cycle time.

In general, the ideal medium gives you a direct connection between your action and a result. Instantaneously. Video games have really mastered this concept (at least, the well-designed ones like Mega Man X) with tight controls and self-teaching levels. No manuals. You press a button, jump, and see how far you can go. The rules of this little universe are illuminated immediately, as you play. Could you imagine if programming were as tactile as that? Try to envision a language + environment that didn't need a manual or any outside documentation. What would that look like?

Difficulty is directly tied to the length of the feedback loop, multiplied by the number of input factors (dimensions). It also depends on whether the information you receive indicates a clear direction when you screw up. If you get a lot of conflicting information or ambiguity, it's not that helpful; imagine if some Pac-Man ghosts hurt you while others didn't, but they all looked the same and it seemed pretty random. Pretty frustrating, eh? As it just so happens, programming languages do exactly this shit. It's extremely frustrating for novices.

But keep in mind: software doesn't need to be foolproof. In fact, making foolproof software is extremely time consuming because 90% of your code will deal with edge cases from side effects of side effects messing with other side effects. There is a notion of "acceptable failure" that should be embraced, so long as the cause is made clear and it's easy to CTRL+Z out of it. User error is a perfectly acceptable scenario if the feedback is clear and immediate. You don't blame the designer if Mario falls into a pit; you were warned.
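To make that concrete, here's a minimal sketch of the "acceptable failure" idea (all names here are hypothetical, not from any particular library): let the user make mistakes, but snapshot state before every change so CTRL+Z always gets them back.

```python
# "Acceptable failure": user error is fine as long as the cause is clear
# and it's trivially reversible. Snapshot before mutating, undo in one step.

class UndoableDocument:
    def __init__(self, text=""):
        self.text = text
        self._history = []  # stack of snapshots, popped on CTRL+Z

    def edit(self, new_text):
        # Save the current state *before* mutating, so any edit --
        # including a mistaken one -- can be undone.
        self._history.append(self.text)
        self.text = new_text

    def undo(self):
        # No-op when there's nothing to undo, so undo itself is also "safe".
        if self._history:
            self.text = self._history.pop()

doc = UndoableDocument("hello")
doc.edit("hello wrold")  # oops, user error
doc.undo()               # CTRL+Z: back to a known-good state
print(doc.text)          # -> hello
```

The point isn't the data structure; it's that once mistakes cost nothing to reverse, you don't have to make the software foolproof.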

Design for Drunks

The next time you build some kind of product (be it software, a physical thing, or even a physical place or system like the NYC subway), try this thought experiment:

Would this thing still be usable if I were incredibly drunk?

All kinds of interesting challenges for makers emerge from the user errors that happen when you're trashed:

Your vision is impaired. On top of that, you'll be a lot slower cognitively. It's hard to read things... especially small labels. Everything is bleary and I can hardly squint. Reading is such a huge effort that I'm not gonna bother at all; I'll just cross my fingers and hope the thing does what it looks like it does. So it's good to make the interface as synaesthetic as possible. If I screwed up and forgot to enter something, instead of being like "Enter a date in the third input field labelled 'End Date'", just fucking highlight the thing. Otherwise I'm gonna be like "ok, where the hell in this page full of form fields is the one labelled... oh shit, I gotta read each of these... ARGH FUCKIT RAGEQUIT". A heavy iron gate with a big flashing red sign making scary noises is more likely to make me stop and turn around than some unassuming wooden door with a little "Do Not Enter" placard that takes me five minutes to notice while I'm fiddling with the knob. The more sensory dimensions you hit someone with (color, movement, sound, etc.), the more likely it'll register through the noise, and the faster they understand what the product is trying to say to them.

Your pointing and gesturing become a lot less precise. You stumble into the bathroom and slap blindly at the wall for the light switch instead of carefully pressing it with one finger. If it has one of those fine-tuned brightness knobs? Useless. It's really got only two relevant states now: all the way on or all the way off. It's gonna be really hard to click that tiny little button. TV remote? Forget it. So make control targets nice, fat, and generous. Leave no ambiguity, and make important features especially distinguished. See what exaggeration does...

Because when you're drunk, lots of things start to look the same. Two switches of the same color lying right next to each other? I'm definitely gonna grab the wrong one. The N, Q, and R trains all have yellow circles for symbols and stop at the same platform... I just might jump on the wrong one.

You stop thinking and just run on habit. If all the light switches in my house turn the lights ON when they're in the UP position, except the one in the bedroom, I'm definitely gonna fuck up. 

So, anyway ... there are even more examples. Next time, just perform this test yourself. Have a few drinks and see if that thing isn't frustrating to use. 

Jonathan Blow: Programming Aesthetics


This is a really great talk by Jonathan Blow, the creator of Braid. It may be titled "How to Program Independent Games" but it should really be called "How to Get Shit Done as a Programmer". 99% of his talk has nothing to do with games at all, and is generally applicable to anyone developing a large quantity of code. He discusses his notion of programming aesthetics (what is "good code") and how that notion has evolved from when he was a student to today.

The real force behind his evolution was the decision to go from having a pile of half-finished projects to finally releasing a game: to "be effective at getting things done". He recognized that shipping a game is a monumental task, and that you must be brutally effective to achieve it.

A few of his points:

Impulses to optimize are usually premature. Optimizations demand assumptions, and assumptions constrain you in the future. They typically result in code that is trickier and harder to understand later.

Most code is not performance sensitive. If only 5,000 lines out of 100K are really performance sensitive and you made sure all the code was fast, you've wasted at least 95% of your effort, almost certainly at the cost of maintainability. This is obviously true for Jonathan Blow's case, where he's thinking about frame rates and whatnot. For other applications, like UIs, "performance" means how fast the user gets things done, and I would state a corollary: most UI elements are not performance sensitive either. Say I introduce some speedup or shortcut in the number of gestures needed to achieve X, whittling it from 5 clicks to 4. If X is a rare operation (once a day) rather than a frequent one (once a minute), then I've really wasted my time adding complexity to the code, baking in assumptions, and adding the logic to eliminate that 5th click.

And yet someone will point out, "but it's not PERFECT!" Perfect in what sense? Getting down to 4 clicks is potentially flawed in EVERY OTHER dimension of reality that actually matters: time spent & opportunity cost, code complexity, inflexibility in the future, stuffing the UI with busy panels full of random shit, etc. Again, use your judgment and settle priorities definitively. And keep in mind, there are different ways to assess the value of reduction; there are absolutely situations where something occurs rarely but is extremely important to optimize. For example: the initial signup. 
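To make the optimization point concrete, here's a hypothetical illustration (not from the talk): a "clever" memoized version of a trivial function, and the assumptions it quietly bakes in.

```python
# Purely illustrative: both functions compute the same thing.

def total_price_simple(prices):
    # Obvious and flexible: works on any iterable of numbers.
    return sum(prices)

def total_price_optimized(prices, _cache={}):
    # Premature optimization: memoize on a tuple key to skip re-summing.
    # We've now assumed the inputs are repeated often enough to justify a
    # cache, added a place for stale data to hide, and made the function
    # grow memory without bound -- all to speed up something that was
    # almost certainly never the bottleneck.
    key = tuple(prices)
    if key not in _cache:
        _cache[key] = sum(prices)
    return _cache[key]

# Same answer; only one is worth reading six months later.
print(total_price_simple([1.5, 2.0]))     # -> 3.5
print(total_price_optimized([1.5, 2.0]))  # -> 3.5
```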

A generalized system is usually worse than a specific/hardcoded one. Typically you think, "oh yes if I make this generalized then it can be useful for all sorts of inputs and do so many things." But if you only actually used two cases out of 20 then you've not only wasted your time but also made your work harder to understand... 

Generalized systems are usually less self-documenting than specific ones. (Such a good point, at 26:00.) In fact, they are downright, literally, obfuscated. When confronting a generalized system, you might initially think it's fine to delete something; but as it turns out, that something affects something else that affects something else, and then you have a mess. With specific systems, you can look at them, understand totally what they do, and delete or refactor with confidence.

For example: an endpoint that takes ten optional parameters, yielding twenty different results depending not only on which inputs you provide but also their values. Is this really better than five endpoints that each do something very straightforward? Is some marginal amount of "code reuse" really worthwhile? Next time, ask this basic question about your code: is this function intended to be generally usable, or is it a one-off thing? It's really not helpful to obfuscate that.
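A sketch of that contrast in code (purely illustrative; the function names and data are made up):

```python
USERS = [
    {"name": "ana", "active": True,  "deleted": False},
    {"name": "bob", "active": False, "deleted": True},
]

# Generalized: one endpoint, a pile of optional parameters. What it returns
# depends on which combination you pass; callers (and future maintainers)
# have to reverse-engineer the valid combinations.
def get_users(active=None, include_deleted=False):
    result = [u for u in USERS if include_deleted or not u["deleted"]]
    if active is not None:
        result = [u for u in result if u["active"] == active]
    return result

# Specific: each function does one obvious thing. You can delete or
# refactor any of them with confidence, because nothing else routes
# through it.
def get_active_users():
    return [u for u in USERS if u["active"] and not u["deleted"]]

def get_deleted_users():
    return [u for u in USERS if u["deleted"]]

print([u["name"] for u in get_active_users()])  # -> ['ana']
```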

Adding additional state is usually more bug-prone than a functional style. Generally, the less state you keep track of, the better. Keeping state is often associated with minor optimizations (to reduce fetches or requests) but often adds far more complexity than it's worth. 
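A hypothetical illustration of that last point: the same running total kept as cached, mutable state versus recomputed functionally from a single source of truth.

```python
# Stateful: the object caches an intermediate value to "save" recomputation.
class Cart:
    def __init__(self):
        self.items = []
        self.total = 0.0  # cached state -- one more thing that can drift

    def add(self, price):
        self.items.append(price)
        self.total += price  # forget this line in any code path that
                             # touches items, and the cache silently lies

# Functional: no cache, no sync bugs. Recompute from the data itself.
def cart_total(items):
    return sum(items)

cart = Cart()
cart.add(2.0)
cart.add(3.5)
print(cart.total)              # -> 5.5
print(cart_total(cart.items))  # -> 5.5
```

The cached `total` only pays off if summing is actually expensive; until it is, it's just an extra invariant to break.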

John Carmack 1999

For some reason I really like reading/watching interviews with tech folks from 10+ years ago. Firstly, because it's fun to be like "oh hey, remember those floppy disks?! HAHA". Secondly, because it's interesting to observe path dependence in a pseudo first-hand retrospective. Thirdly, because it's funny to see that some things never change.

Cool interview w/ Carmack on /.

Q: I once read, in Wired, an article that said you have an incredible headstart on everyone else for making "virtual worlds" on the Internet using your engine from the Quake games. Do you have any intention of doing this?
A: Making Snow Crash into a reality feels like a sort of moral imperative to a lot of programmers, but the efforts that have been made so far leave a lot to be desired.
It is almost painful for me to watch some of the VRML initiatives. It just seems so obviously the wrong way to do something. All of this debating, committee forming, and spec writing, and in the end, there isn't anything to show for it. Make something really cool first, and worry about the spec after you are sure it's worth it!

Funny. Like many other nerd ideas, Virtual Worlds have been tried and re-invented again and again.

And a nice quip about programming in general & what makes a good programmer:

Programming is really just the mundane aspect of expressing a solution to a problem. There are talents that are specifically related to actually coding, but the real issue is being able to grasp problems and devise solutions that are detailed enough to actually be coded.

Good design disappears from the user's point of view

It's very complex to make something simple: you whittle away and reduce until you're left with the essence of the problem being solved. When it's achieved, the technology just "fades away" and you're no longer aware of the tool. It's just an extension of yourself.

One of the things I love about the iPad, for instance, is when you’re using the iPad, the iPad disappears, it goes away. You’re reading a book. You’re viewing a website, you’re touching a web site. That’s amazing and that’s what SMS is for me. The technology goes away and with Twitter the technology goes away. It’s so easy to follow anything you’re interested in. It’s so easy to tweet from wherever you are. And the same is true with Square. We want the technology to fade away so that you can focus on enjoying the cappuccino that you just purchased.
— Jack Dorsey

As tools get better, they reduce the amount of stuff you need to learn to use them. Moving to automatic transmissions in cars removed the need for a clutch and a shifter, and most importantly, it eliminated the need to understand that the car had various gears at all. The car is just magic.

You shouldn't need to learn about computers in order to use them. 

Lost Interview 1990

Transcript. In case the page is lost forever, a copy of the transcript.

At 36:30

On "Doers" vs. "Thinkers":

my observation is that the doers are the major thinkers. The people that really create the things that change this industry are both the thinker and doer in one person. And if we really go back and we examine, you know did Leonardo have a guy off to the side that was thinking five years out in the future what he would paint or the technology he would use to paint it, of course not. Leonardo was the artist but he also mixed all his own paints. He also was a fairly good chemist. He knew about pigments. Ah knew about human anatomy. And combining all of those skills together, the art and the science, the thinking and the doing, was what resulted in the exceptional result.
And there is no difference in our industry. The people that have really made the contributions have been the thinkers and the doers. And when you, when you ah, a lot of people of course, it’s very easy to take credit for the thinking. The doing is more concrete. But somebody, it’s very easy to say oh I thought of this three years ago. But ah usually when you dig a little deeper, you find that the people that really did it were also the people that really worked through the hard intellectual problems as well.
— SJ

The Bicycle for the Mind

Don't forget this:

I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.

And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.
— Steve Jobs


re: inventing on principle

So what's really impressive about this is actually not any technical feat but his observations and ingenuity. 

When he points out that "creators need a direct connection to what they're making" (versus "working blind"), it's one of those things we hardly notice in our day-to-day because we take the status quo for granted. Just look around and see how much shit is broken in this regard.

To be on the right side of his principle, there are two parts: the immediate feedback and the direct input.

  1. Feedback. When working blind, it's so much harder to discover things... it's almost impossible to stumble into things by accident. Look at the way he tweaks a couple parameters and discovers a new game concept based on gravity. These ideas are all over the place, and the rate at which you discover them is directly proportional to the rate at which you can experiment.
  2. Direct input. You should be able to make things as fast as you can think them. In his example he uses his hand for animating and the result is 10x faster. It's effective because that's what you wish you could use in the first place; clicking around buttons with your mouse and adjusting some keyframes is an indirect way to achieve what the hand does best: precise motion. 

So here's a question: What if musical composition programs were redesigned with this in mind? You could use your own voice as an input to musical composition, using it to put in a sequence of relative pitches and control timing.

After all, the voice is the most "native" instrument to us.
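As a rough sketch of how that input could work (assuming pitch detection happens elsewhere, e.g. via autocorrelation on the mic signal), a sequence of detected vocal frequencies can be reduced to key-independent semitone intervals:

```python
import math

def to_semitone_intervals(freqs_hz):
    """Convert detected pitches (fundamental frequencies in Hz) to semitone
    steps relative to the first note, so the singer's absolute key is
    irrelevant. A semitone is a factor of 2**(1/12) in frequency."""
    f0 = freqs_hz[0]
    return [round(12 * math.log2(f / f0)) for f in freqs_hz]

# A hummed C4-E4-G4 arpeggio (roughly 261.6, 329.6, 392.0 Hz)
print(to_semitone_intervals([261.6, 329.6, 392.0]))  # -> [0, 4, 7]
```

From there, the composition tool only has to ask you for timing and a key, not note-by-note mouse input.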

Finally, Social Games that don't suck

SimCity is back ... with social integration. The Sims Social was a huge hit so I'm excited to see how this turns out.

Ever since it first emerged, social gaming has been synonymous with the king of them all: FarmVille, a name that's become shorthand for a whole category of really crappy games that trick people into forking over cash. They are games blatantly engineered for profit, not fun. And amid the whirlwind of "social media" and "gamification" and all these other neologisms, we've forgotten one basic fact...

Social Games are just multiplayer games.

remember this?

And by that measure, they are nothing new. They've been around ever since the original Mario Bros. arcade game. Zynga had better be worried, because their short-sighted business model of building disposable games is going to face serious resistance once quality game-makers turn their guns in this direction. Zynga's existence has thus far been an accident of arbitrage: when a new medium opens up, someone rushes in, sucks hapless users dry, and burns out.

What would you rather play: SimCity or CityVille? The Sims Social or YoVille? Just place those side by side.