A review of Algorithms To Live By by Brian Christian and Tom Griffiths.
You might expect that the only use you have for a computer scientist is designing a better app. But he might be able to help you design a better you.
Figure 1. Especially if you’re a cyborg!
Programmer Brian Christian and cognitive scientist Tom Griffiths argue that “There is a particular set of problems that all people face, problems that are a direct result of the fact that our lives are carried out in finite space and time… For more than half a century, computer scientists have been grappling with, and in many cases solving, the equivalents of these everyday dilemmas.” Their book, Algorithms to Live By, is a survey of solutions to computer problems that have insightful things to say about human problems, like finding a mate or organizing your closet. And don’t be intimidated by the jargon: Oxford defines “algorithm” as “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.” Our authors insist: “When you cook bread from a recipe, you’re following an algorithm. When you knit a sweater from a pattern, you’re following an algorithm.” What other sets of rules might be helpful?
Figure 2. The computer scientist’s approach to romance is really quite simple – make a few billion dollars developing an extremely popular app, then, maybe, closely follow the steps described below, and you, too, could marry a supermodel.
You could spend a lifetime searching for the perfect spouse or house; a computer could spend even longer searching for the perfect solution to a query. To actually arrive at a good solution sometime soon, computer scientists have long studied the optimal time to stop searching and commit – and it turns out that there is a proven mathematical solution under certain conditions: review the first 37% of your expected total pool of options without committing, then choose the next option that is better than everything seen so far. Why does this work? Imagine you are trying to hire the best secretary – but with the condition that once you’ve dismissed a candidate, you cannot recall him.
“With just one applicant the problem is easy to solve—hire her! With two applicants, you have a 50/50 chance of success no matter what you do. You can hire the first applicant (who’ll turn out to be the best half the time), or dismiss the first and by default hire the second (who is also best half the time). Add a third applicant, and all of a sudden things get interesting. The odds if we hire at random are one-third, or 33%. With two applicants we could do no better than chance; with three, can we? It turns out we can, and it all comes down to what we do with the second interviewee. When we see the first applicant, we have no information—she’ll always appear to be the best yet. When we see the third applicant, we have no agency—we have to make an offer to the final applicant, since we’ve dismissed the others. But when we see the second applicant, we have a little bit of both: we know whether she’s better or worse than the first, and we have the freedom to either hire or dismiss her. What happens when we just hire her if she’s better than the first applicant, and dismiss her if she’s not? This turns out to be the best possible strategy when facing three applicants; using this approach it’s possible, surprisingly, to do just as well in the three-applicant problem as with two, choosing the best applicant exactly half the time.”
Notably, “Even when we act optimally in the secretary problem, we will still fail most of the time—that is, we won’t end up with the single best applicant in the pool.” In other words, your soulmate could have been in the first 37% of people you dated, and you’ll never find anyone better – or your soulmate could be the 39th person out of 100 you would have dated, had you not married #38 because she was better than the previous 37. But, crucially, no other method improves your odds within these parameters. Of course, the parameters might not perfectly reflect the real world – if there’s a chance your best reject from the first 37% will take you back, then you can afford to explore more options before trying to commit; but if, on the other hand, there’s a chance that your best options won’t actually agree to marry you (perhaps you’re in their first 37%), then you should explore less before trying to settle down.
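If you’d rather check the math than take it on faith, the rule is easy to simulate. Here’s a quick Monte Carlo sketch (mine, not the authors’) that runs the look-then-leap strategy over many shuffled applicant pools; the win rate peaks near a 37% look phase, at roughly a 37% chance of landing the very best:

```python
import random

def secretary_win_rate(n=100, look_fraction=0.37, trials=20_000):
    """Reject the first look_fraction of n applicants, then hire the
    first one who beats everyone seen so far; count how often the
    hire turns out to be the single best applicant in the pool."""
    cutoff = int(n * look_fraction)
    wins = 0
    for _ in range(trials):
        pool = [random.random() for _ in range(n)]
        best_seen = max(pool[:cutoff], default=float("-inf"))
        hired = pool[-1]  # if no one impresses us, we're stuck with the last
        for score in pool[cutoff:]:
            if score > best_seen:
                hired = score
                break
        wins += hired == max(pool)
    return wins / trials

for frac in (0.10, 0.25, 0.37, 0.50, 0.75):
    print(f"look at first {frac:.0%} -> win rate ~{secretary_win_rate(look_fraction=frac):.2f}")
```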
A significant challenge comes if you don’t know the expected total: how can anyone guess, for example, the total number of people they’ll date in their lifetime? You could try to find an average or a proxy, but the easiest expected total to examine is time. One way of looking at it is lifetime: if the average American male lives to 78, he has until just under 29 to explore and thereafter should marry his next best option; likewise, the average American woman lives to 81, suggesting she has until just under 30 to get the lay of the land – but this calculation takes into account neither the narrower biological window for women to have kids nor the economic reality that younger women tend to be able to date a larger pool of men. One might instead have a different model in mind: if one is absolutely determined to get married by a certain time – say 35 – then, factoring in only adult dating, the “better than before” optimal stopping age is about 24.
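The arithmetic behind those ages is nothing fancier than taking 37% of whichever window you pick (the windows here are the review’s framing, not a formula from the book):

```python
def leap_age(start, end, look_fraction=0.37):
    """The look-then-leap point falls 37% of the way through the window."""
    return start + look_fraction * (end - start)

print(leap_age(0, 78))   # ~28.9: "just under 29" on a male lifespan
print(leap_age(0, 81))   # ~30.0: "just under 30" on a female lifespan
print(leap_age(18, 35))  # ~24.3: adult dating with a hard deadline of 35
```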
Figure 3. If lifetime is the proper window, then things worked out neatly for me but my younger wife apparently should have held off for more observation. She assures me, however, that she was on the accelerated timeline.
Regardless, the implications of a time-based plan are twofold: first, you should explore a great variety of options in your initial phase so you know what kind of options are available to you and what you want; second, as your time window closes, you may need to lower your standards to succeed.
Inherently, the more you explore new options, the less likely they are to be better than your familiar favorites – but everything that is currently a favorite was once new. So, front-load your exploration. Very amusingly, the mathematician who popularized this problem, Merrill Flood, introduced it in order to convince his daughter, who had just graduated high school but was in a serious relationship with an older man he did not approve of, to keep looking while she was still so young. Practically, if you’ve just moved to a city, you would do well to try out lots of different restaurants all at once – but by the time you’ve lived there for several years, the chances are that a new restaurant will not be as enjoyable as a favorite you’ve never had a bad experience with. Indeed, happiness researchers speculate that older people are happier partially because they’ve trimmed their social group and daily activities and spend most of their time with people they actually like, doing things they want to do. In experiments where people try slot machines whose pay-out rates differ from machine to machine but stay constant over time, “people tend to over-explore—to favor the new disproportionately over the best.” But a dynamic situation – in which an individual slot machine’s pay-out rate changes over time – is impossible to perfectly optimize. That can be closer to actual life – but it does suggest that until your favorites disappoint you, exploration after a certain point is overrated.
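The underlying model here is what computer scientists call a multi-armed bandit. As a toy illustration (my own, and much cruder than the Gittins-index machinery the book describes), here’s an epsilon-greedy gambler who explores with some probability and otherwise sticks with the best machine so far; giving him a decaying exploration schedule – lots of novelty early, favorites later – tends to beat exploring at a constant rate:

```python
import random

def bandit(pulls=10_000, arms=(0.3, 0.5, 0.7), explore_prob=lambda t: 0.1):
    """Epsilon-greedy play on slot machines with fixed payout rates:
    with probability explore_prob(t), try a random machine; otherwise
    pull the machine with the best observed average so far."""
    wins, plays, total = [0] * len(arms), [0] * len(arms), 0
    for t in range(1, pulls + 1):
        if 0 in plays or random.random() < explore_prob(t):
            arm = random.randrange(len(arms))  # explore something new
        else:
            arm = max(range(len(arms)), key=lambda i: wins[i] / plays[i])  # exploit
        payout = random.random() < arms[arm]
        wins[arm] += payout
        plays[arm] += 1
        total += payout
    return total / pulls

random.seed(0)
print("constant 10% exploration: ", bandit())
print("front-loaded exploration:", bandit(explore_prob=lambda t: t ** -0.5))
```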
Indeed, optimal stopping is trying to prevent you from over-exploring: once you’ve seen 37% of a group, you tend to have a good sense of what’s out there. When you find the next best thing, don’t let the perfect be the enemy of the good: quit while you’re ahead. You should also beware the situation in which your exploration phase is meaningfully different from your decision phase – where, for whatever reason, the kind and quality of options you had available to you before are no longer available (and then perhaps you ought to reset the clock). If what you’re looking for is quantifiable, then the math is pretty straightforward – if you wanted to hire the secretary who was the best typist, regardless of other qualities, “the chance that our next applicant is in the 96th percentile or higher will always be 1 in 20… the decision of whether to stop comes down entirely to how many applicants we have left to see.” So, “when looking at the next-to-last applicant, the question becomes: is she above the 50th percentile? If yes, then hire her; if not, it’s worth rolling the dice on the last applicant instead, since her odds of being above the 50th percentile are 50/50 by definition.” For the quantifiable, the math rolls on easily: “you should choose the third-to-last applicant if she’s above the 69th percentile, the fourth-to-last applicant if she’s above the 78th, and so on, being more choosy the more applicants are left. No matter what, never hire someone who’s below average unless you’re totally out of options.” Notably, if you have this kind of specific information, your chance of getting the best option jumps considerably, up to 58% – and you might even adopt a rule that says if an option appears above a certain threshold, it should be instantly taken. For the less quantifiable, you might chance committing to a dream option in the exploration phase – but as the commitment phase drags on without an obviously superior option, you might need to reconsider what you’re looking for.
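Christian and Griffiths don’t print the formula behind those percentiles, but they come out of Gilbert and Mosteller’s classic 1966 analysis of the full-information game. If I’ve transcribed their condition correctly, the thresholds can be recovered numerically – and they reproduce the 50th, 69th, and 78th percentiles quoted above:

```python
from math import comb

def acceptance_threshold(k):
    """Full-information secretary problem (Gilbert & Mosteller, 1966):
    with k applicants still to come, accept a best-yet score d where
    sum over j=1..k of C(k, j) * ((1 - d) / d)**j / j equals 1."""
    lo, hi = 0.5, 1.0
    for _ in range(60):  # bisection; the sum shrinks as d grows
        d = (lo + hi) / 2
        t = (1 - d) / d
        total = sum(comb(k, j) * t ** j / j for j in range(1, k + 1))
        lo, hi = (d, hi) if total > 1 else (lo, d)
    return d

for k in range(1, 5):
    print(f"{k} still to come: accept if above the {acceptance_threshold(k):.0%} mark")
# Prints 50%, 69%, 78%, 82% -- matching the thresholds quoted above.
```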
So far we’ve been considering searching for new intrigues like restaurants and romances, but computer science can also tell us how best to organize (for best search) the things stored in email inboxes, closets, and bookshelves – and it turns out that the best method for sorting is often not to sort at all. “Sorting something that you will never search is a complete waste; searching something you never sorted is merely inefficient.” How often are you really trying to find a specific book on your personal shelves? Consider the opportunity cost: “we search with our quick eyes and sort with slow hands.” You should be especially wary as the size of the organizing task grows – the more stuff there is, the more wrong places each item could be, and sorting costs grow faster than the collection does. Christian and Griffiths argue further that for your own inbox or computer, it’s often faster simply to use a search bar than to devote lots of ongoing time to meticulously grouping everything into folders.
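You can put a toy cost model on that trade-off (the constants here are my own back-of-envelope choices, not the book’s): suppose hand-sorting n items costs about n·log2(n) comparisons, scanning an unsorted pile costs n/2 looks per search, and searching a sorted one costs log2(n). Then the number of searches needed to pay off the sort actually grows with the size of the mess:

```python
from math import log2

def searches_to_justify_sorting(n):
    """Break-even lookups: total sort cost divided by the per-search saving."""
    return (n * log2(n)) / (n / 2 - log2(n))

for n in (30, 300, 3000):  # a shelf, a wall, a small library
    print(f"{n} books: sorting pays off only after ~{searches_to_justify_sorting(n):.0f} searches")
```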
Figure 4. The post-laundry optimal sorting of socks: “‘Socks confound me!’ confessed legendary cryptographer and Turing Award-winning computer scientist Ron Rivest… He was wearing sandals at the time.”
You may be determined to get organized – and maybe you have to be, because you’re running out of room. I’ve previously reviewed (and enjoyed) Marie Kondo’s The Life-Changing Magic of Tidying Up, the main advice of which is to keep only things that spark joy – but Kondo also has detailed instructions for beautifully organizing your closets according to the colors and types of items. Computer scientists reject beauty in favor of utility, because there’s a significant trade-off between computer performance and storage capacity. And apparently innumerable studies have shown that the most efficient way to organize is to arrange everything so that your most recently used items are the most convenient. Or, to put it in more familiar closet-purging terminology, the winning question is: “When was the last time you wore it?” Notably, demonstrably less efficient means of purging include random removal, getting rid of the oldest stuff, and even getting rid of the least frequently used stuff (though computer scientists appear not yet to have tested the efficiency of “spark joy”).
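In caching terms, this is the Least Recently Used (LRU) eviction policy. A minimal sketch of a self-purging closet (the closet framing is mine; the policy is the one the book endorses):

```python
from collections import OrderedDict

class LRUCloset:
    """Hold at most `capacity` items; when something new arrives and the
    closet is full, evict whatever was least recently used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def use(self, item):
        if item in self.items:
            self.items.move_to_end(item)  # just worn -> most recent spot
            return
        if len(self.items) >= self.capacity:
            evicted, _ = self.items.popitem(last=False)  # least recent goes
            print(f"donate the {evicted}")
        self.items[item] = True

closet = LRUCloset(3)
for thing in ["jeans", "parka", "suit", "jeans", "sneakers", "scarf"]:
    closet.use(thing)
# Donates the parka, then the suit; the re-worn jeans survive.
```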
Figure 5. The computer scientist who arrived at the “least recently used” optimization was a Hungarian named László Bélády, who fled Hungary in 1956 with nothing more than “one change of underwear” and his thesis. In 1961, he managed to get into the United States, but only with “his wife, an infant son, and $1,000.” Christian and Griffiths note: “It seems he had acquired a finely tuned sense of what to keep and what to leave behind by the time he found himself at IBM, working on cache eviction.”
An overarching theme of the book is to be computationally kind to yourself and others. “One of the implicit principles of computer science, as odd as it may sound, is that computation is bad: the underlying directive of any good algorithm is to minimize the labor of thought.” Humans can easily be overloaded: “With one ball in the air, there’s enough spare time while that ball is aloft for the juggler to toss some others upward as well. But what if the juggler takes on one more ball than he can handle? He doesn’t drop that ball; he drops everything. The whole system, quite literally, goes down.” Unfortunately, humans can’t simply get more RAM – “we’re stuck with what we got.” Computers avoid thrashing by doing fewer things at once and coalescing similar tasks to do together – which neatly echoes the best of self-help: focus on one thing at a time, be willing to say no to things outside your core interests, and group tasks to minimize switching costs. Work on whatever you are most passionate about, decline invitations to less important activities, and treat your email like your mailbox by checking it once a day. If that’s hard given your responsibilities, do your best:
“For your computer, the annoying interruption that it has to check on [is] you. You might not move the mouse for minutes or hours, but when you do, you expect to see the pointer on the screen move immediately, which means the machine expends a lot of effort simply checking in on you. The more frequently it checks on the mouse and keyboard, the quicker it can react when there is input, but the more context switches it has to do. So the rule that computer operating systems follow when deciding how long they can afford to dedicate themselves to some task is simple: as long as possible without seeming jittery or slow to the user…To find this balancing point, operating systems programmers have turned to psychology, mining papers in psychophysics for the exact number of milliseconds of delay it takes for a human brain to register lag or flicker. There is no point in attending to the user any more often than that… The moral is that you should try to stay on a single task as long as possible without decreasing your responsiveness below the minimum acceptable limit. Decide how responsive you need to be—and then, if you want to get things done, be no more responsive than that.”
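Translated into a workday, the same rule might look like this sketch – entirely my own extrapolation, with `tasks` and `process_inbox` as hypothetical stand-ins for your real work: single-task, and batch the interruptions into a fixed, deliberately chosen interval rather than reacting to each one.

```python
import time

def workday(tasks, process_inbox, check_interval_s=45 * 60):
    """Stay on one task at a time; poll for interruptions no more often
    than the responsiveness you've decided you actually owe people."""
    last_check = time.monotonic()
    for do_deep_work in tasks:
        do_deep_work()  # one thing at a time, uninterrupted
        if time.monotonic() - last_check >= check_interval_s:
            process_inbox()  # handle everything that piled up, all at once
            last_check = time.monotonic()
```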
“We can be ‘computationally kind’ to others by framing issues in terms that make the underlying computational problem easier.” You’ll get better results asking for something specific and easily answerable (“Are you available Tuesday at 2 PM?”) than something general that requires much more cognitive work (“When are you available in the next few weeks?”). Or, to use another example: if a group of friends (or a couple) is trying to decide where to eat or what movie to watch or what to do, the least empathetic answer is some punting version of “I don’t know, I’m flexible – what do you want to do?”, which translates into “Here’s a problem, you handle it.” The much better option is to answer “Personally, I’m inclined toward x. What do you think?” Trying to guess what others want is one of the most difficult computational problems there is – help others out! Indeed, this is why you need to tell your loved ones what you want for Christmas.
Because of the prospect of overload, simple systems are very often much better than complex systems. “A theme that came up again and again in [their] interviews with computer scientists was: sometimes ‘good enough’ really is good enough.” Harry Markowitz won the Nobel Prize in economics for demonstrating that diversifying across risky assets could produce a superior, less risky return – but his personal investments were actually a very simple 50/50 split between US stocks and bonds, and he did just fine. Most people whose heads hurt when they have to think about money will be far better off with a set-it-and-forget-it diversified, self-balancing index fund than with trying to manage their money closely themselves.
Figure 6. You just need to be careful that you don’t “overfit” your goals by creating the wrong incentives for your organization or by overzealously pursuing one thing at the expense of others (such as taking unhealthy steroids to build muscle). A fear among futurists is that some mega-computer will someday be directed to maximize pencil production and then bulldoze everyone’s homes for the wood (perhaps with you inside because you’re wasting pencils).
Consistent with the overarching theme of doing less computing, if you do need to solve a really hard problem, the experience of computer science would tell you to make the problem less hard and solve that version first. In “constraint relaxation,” ask yourself how you’d take on a challenge if you had unlimited resources, or if you could instantly learn a new skill, or whatever – and you may very well find that you’re well on your way to figuring out how to overcome your actual issue.
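If I remember the chapter right, the formal versions of this trick include “continuous relaxation”: drop a hard all-or-nothing constraint, solve the easier fractional problem, and use that answer as a guide and an upper bound for the real one. A sketch with made-up home projects (greedy by value-per-dollar, which is only optimal because fractions are allowed):

```python
def relaxed_knapsack(projects, budget):
    """Relax the 0/1 rule: allow fractional projects, then greedily buy
    the best value-per-dollar until the budget runs out.
    `projects` is a list of (name, value, cost) tuples."""
    plan, total = [], 0.0
    for name, value, cost in sorted(projects, key=lambda p: p[2] / p[1]):
        if budget <= 0:
            break
        fraction = min(1.0, budget / cost)  # how much of it we can afford
        plan.append((name, round(fraction, 2)))
        budget -= fraction * cost
        total += fraction * value
    return plan, total

projects = [("kitchen", 8, 5), ("garden", 3, 4), ("attic", 6, 2)]
print(relaxed_knapsack(projects, budget=7))
# ([('attic', 1.0), ('kitchen', 1.0)], 14.0) -- a ceiling for the real 0/1 problem
```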
Figure 7. Just don’t start executing as if the constraint doesn’t exist at all, à la the South Park gnomes’ problematic plan: Step 1 – Collect underpants. Step 2 – ? Step 3 – Profit.
Christian and Griffiths have lots of other advice from computer science – about how best to do your laundry (find “the single step that takes the least amount of time—the load that will wash or dry the quickest. If that shortest step involves the washer, plan to do that load first. If it involves the dryer, plan to do it last”), run auctions (everyone should submit their best price with the anticipation that the highest bidder will pay whatever the second-highest bidder proposed), or manage your to-do list (if a low-priority thing is preventing a high-priority thing, low becomes high – though 84% of scheduling problems still have no settled optimal solution). They’ve got complaints – about sports tournaments (which are not optimized to ensure the best team prevails, for better or worse), about libraries and bookstores (which should display the most recently returned/bought books up front, not the newest acquisitions), and about people’s approach to gambling (slot machines are memoryless, so you can never improve your odds – and because there is no optimal stopping point, you probably should never start).
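That laundry recipe is, if I’m identifying it correctly, Johnson’s rule for two-machine scheduling, and it compresses into a few lines – here `loads` maps each load’s name to its (wash, dry) minutes:

```python
def laundry_order(loads):
    """Johnson's rule for a washer-then-dryer pipeline: repeatedly find the
    single shortest step anywhere; if it's a wash time, schedule that load
    as early as possible; if it's a dry time, as late as possible."""
    remaining = dict(loads)
    front, back = [], []
    while remaining:
        name = min(remaining, key=lambda n: min(remaining[n]))
        wash, dry = remaining.pop(name)
        if wash <= dry:
            front.append(name)    # shortest step is the wash: go first
        else:
            back.insert(0, name)  # shortest step is the dry: go last
    return front + back

print(laundry_order({"towels": (30, 60), "delicates": (15, 10), "jeans": (45, 50)}))
# ['towels', 'jeans', 'delicates']: the quick-drying delicates close out the day.
```

But let’s conclude where we began, with romance: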
“A game-theoretic argument for love would highlight one further point: marriage is a prisoner’s dilemma in which you get to choose the person with whom you’re in cahoots. This might seem like a small change, but it potentially has a big effect on the structure of the game you’re playing. If you knew that, for some reason, your partner in crime would be miserable if you weren’t around—the kind of misery even a million dollars couldn’t cure—then you’d worry much less about them defecting and leaving you to rot in jail.”
Figure 8. Click here to acquire Christian and Griffiths’s Algorithms to Live By (8/10). I very much appreciated the idea of trying to apply solutions from one field to a much broader array of problems – and I wish there were more quality books from a variety of professions that did this. Note that if you are absolutely determined to organize your bookshelves, apparently the best algorithm is “mergesort”: divide your books into approximately equal piles, ideally recruit as many volunteers as piles, sort each pile, then combine the piles two at a time.
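For the sufficiently determined, here is what that looks like as code – a standard mergesort (any textbook’s, not specifically the book’s), with the volunteers played by recursive calls:

```python
def mergesort(books):
    """Split the pile, sort each half, then merge the two sorted piles
    by repeatedly taking whichever front book comes first."""
    if len(books) <= 1:
        return books
    mid = len(books) // 2
    left, right = mergesort(books[:mid]), mergesort(books[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right  # one side empties; the rest is already sorted

print(mergesort(["Ulysses", "Dune", "Emma", "Hamlet", "Beloved"]))
# ['Beloved', 'Dune', 'Emma', 'Hamlet', 'Ulysses']
```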
Thanks for reading! If you enjoyed this, forward it to a friend: know anyone who is unmarried and wants to optimize their dating? Or anybody you try to do things with? How about someone with a closet of finite space?