What the Fuck is an Algorithm?
From abacus to artificial intelligence, computation may be pivotal in transforming humanity’s soul.
Algorithm can design the perfect bicycle lane system for any city. Algorithm may help predict the risk of Alzheimer’s. Experts debate the ethics of social media’s algorithm experiments on millions of users. Those are just a few of the day’s headlines.
The news cycles have been tossing around the word “algorithm”—with little explanation—for quite a while now. Algorithms are everywhere. And somehow, algorithms seem to be ruling the world.
The algorithm in daily life
These days, when applied as artificial intelligence (AI), algorithms prioritize Google search results, suggest Netflix shows for you, and push TikTok videos to your attention. Algorithms keep you hooked and get your hormones rushing. They identify who is banned for wrongthink on Twitter, and which of those people gets to appeal to actual humans.
Algorithms decide how much you, personally, pay for stuff on Amazon. They can stop credit card transactions on suspicion of fraud, and target individuals for IRS audits. They personalize advertisements when you peruse The New York Times or Fox News online.
They drive your Tesla, operate robots that assemble your smartphone, and decide which foods are displayed on premium shelf space in supermarkets. They read your license plate when you run a red light at an intersection equipped with a traffic camera. All bookkeeping is now algorithmic; if you buy Prada shoes at Nordstrom, a financial algorithm distributes the revenue to vendors, taxes, commissions, and so on.
Some compare algorithms to recipes or driving instructions—step-by-step solutions. When you follow the steps, you get the desired outcome. But this analogy identifies only primitive procedures of little interest. What makes the algorithm fascinating are mathematical processes that handle calculations based on rules—that is, predefined logic.
“Algorithms are very simple instructions for computers,” says Poul Costinsky, an intelligence systems engineer in Seattle. “Computers are so much more stupid than a human. You must simplify the instruction so there is no fuzziness, no uncertainty. What humans think of as reasonable instructions are almost incomprehensible to computers.”
When you take these mathematical processes to the next level of complexity—for example, by coding 10,000 rules into one algorithm—the algorithm plays a sophisticated role in sifting through masses of data, expediting decision‑making by mitigating the ordeal of choice, and performing monotonous tasks on behalf of humanity.
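To make “predefined logic” concrete, here is a toy sketch in Python. It is not drawn from any real system; the fraud-screening scenario echoes the credit card example above, and the rules, thresholds, and field names are all invented for illustration, with only a handful of rules standing in for the thousands a production algorithm would encode.

```python
# Toy sketch only: a few predefined rules combined into one algorithm that
# sifts transaction data. Rules, thresholds, and field names are invented.

def flag_transaction(amount, country, hour, is_new_merchant):
    """Return True if a transaction looks suspicious under these toy rules."""
    if amount > 5000:                        # rule 1: unusually large purchase
        return True
    if country != "US" and is_new_merchant:  # rule 2: unfamiliar foreign merchant
        return True
    if hour < 5 and amount > 1000:           # rule 3: a big purchase at 3 a.m.
        return True
    return False

transactions = [
    (120.00, "US", 14, False),
    (7200.00, "US", 11, False),
    (1500.00, "FR", 3, True),
]
print([flag_transaction(*t) for t in transactions])  # [False, True, True]
```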
The abacus and everything after
Humans have long taken advantage of implements of mathematical convenience that were forebears to the algorithm. In the beginning, there was the abacus. Also known as a counting frame, this calculating tool is nowadays used to teach arithmetic to small children. It was the first computer, a Sumerian invention, arriving in Mesopotamia around 5,000 years ago. Indigenous Americans had it, too. The Chinese abacus, with a design that enabled complex operations including square roots, surfaced around the 2nd century BC. (The more ancient Antikythera mechanism, a model of the solar system, could predict astronomical positions, but Costinsky says this tool is next to impossible to explain to non-programmers.)
The abacus was basically the first machine for processing numerical data. Instead of using your brain to do arithmetic, you served as the driver of a mechanical instrument that did the calculating. The instructions for crunching numbers on the abacus were an algorithm. “Just like the modern algorithm, the abacus freed the brain to do real work instead of busying it with menial work,” Costinsky says. “The effect on mathematical creativity must have been unimaginable.”
The next revolution in algorithms came in the early 19th century. Europeans got the idea that if you put clockwork-type gears together, you could build a machine that would follow instructions similar to those used with an abacus. The arithmometer, patented in 1820, was the first commercially successful mechanical calculator, and it was widely used until the early 1900s. The difference engine, designed by English polymath Charles Babbage, employed thousands of wheels and rods intended for the production of mathematical tables. It was never completed, but this early adding machine reduced the role of the human to turning a crank. According to Costinsky, the difference engine was the first truly self-contained computer.
The 20th century gave rise to the use of electronic devices, vacuum tubes, to replace mechanical gears. These “flip-flop” electronic circuits had two simple, stable states: off or on; zero or one; false or true. They became the basis of binary computing. From 1943 to 1945, the British employed special-purpose vacuum tube computers, designed by the engineer Tommy Flowers, to break German ciphers. This set of computers, called Colossus, remained a secret for 30 years after the war. It is now regarded as the world’s first programmable electronic digital computer.
The English mathematician Alan Turing is widely considered to be the father of theoretical computer science and AI; he formalized the concept of the algorithm with the Turing machine, a theoretical computational device he described in 1936. The first electronic computers could do several things. They responded to instructions, added, subtracted, multiplied, divided, and stored data. By storing data, it became possible to remember previous decisions made by an algorithm. “Now, it becomes mystical because humans can’t see the gears and see how it’s working,” says Costinsky. “But at least there were warm blinking lights of tubes, which is comforting.”
The innovation of “if” and “goto” operators
The biggest game changer in algorithms, instigated by Turing, was the introduction of the conditional operator “if.” All mechanical calculators can add, subtract, multiply, and divide; the conditional is what lets a machine make a decision. To understand it, think of a thermostat with a sensor that measures temperature. If it goes above 70 degrees, it turns off the heat; if it goes below, the heat comes on. In computer programming, this conditional can be described as “if a then b otherwise c,” also known as the “if-then-else” construct.
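A minimal sketch of that thermostat in Python, assuming the 70-degree threshold from the example above; the function and variable names are illustrative, not taken from any real thermostat:

```python
# The "if-then-else" construct from the thermostat example.
# The 70-degree threshold comes from the example; names are illustrative.

def control_heat(temperature_f: float) -> str:
    """Decide whether the heat should be on or off."""
    if temperature_f > 70:   # if a ...
        return "heat off"    # ... then b
    else:                    # otherwise ...
        return "heat on"     # ... c

print(control_heat(72.5))  # heat off
print(control_heat(65.0))  # heat on
```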
The other relevant emergent operator, “goto,” allows you to repeat the same operation many times on different data. Costinsky explains that this operator is harder for a layperson to understand. Imagine that you have 1,000 coins sorted by weight and put in a line from left to right, he says. You know that the smallest coin weighs one quarter of an ounce and the biggest coin weighs two ounces. How do you find the one that weighs one ounce?
“Once you understand this problem, you can make a mental breakthrough from not being a programmer to being a programmer,” says Costinsky. “If you’re interviewing for a programmer job, you have two minutes to answer the question or you wash out.”
Many people start going from left to right or pick a coin at random. But there is a more straightforward and much faster way that uses the fact that the coins are already sorted. You pick the coin in the middle and weigh it. If it’s exactly one ounce (this is the “if” operator at work), you’ve solved it. If it weighs more than one ounce, the coin you’re looking for is to the left. If it weighs less than one ounce, the coin you’re looking for is to the right. In the first case, pick the coin in the exact middle of the coins to the left of the one you just weighed. Repeat this “goto” step, narrowing the possibilities by half every time. The typical scenario: you have 1,000 coins, then 500, then 250, 125, 62, 31, 15, 8, 4, 2, 1. (The method uses only whole numbers; you can’t look at half a coin, and it doesn’t matter whether you round up or down.) If at any of those “goto” steps the selected coin weighs exactly one ounce, you exit. You’ll find the one-ounce coin in at most ten steps.
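Here is a sketch of that coin search in Python, assuming the coins are represented as a sorted list of weights; the data and names are illustrative. The structured `while` loop plays the role of the “goto,” and the comparisons are the “if.”

```python
# The coin search from the example above: halve the possibilities each time.
# The weights below are illustrative; any sorted list of 1,000 weights works.

def find_one_ounce(weights, target):
    """Return the index of the coin weighing `target`, or -1 if it's absent."""
    left, right = 0, len(weights) - 1
    while left <= right:                 # the "goto": repeat the same steps
        middle = (left + right) // 2     # pick the coin in the middle
        if weights[middle] == target:    # the "if": found it, exit
            return middle
        elif weights[middle] > target:   # too heavy: look to the left
            right = middle - 1
        else:                            # too light: look to the right
            left = middle + 1
    return -1

# 1,000 coins evenly spaced between 0.25 and 2 ounces, already sorted.
coins = [0.25 + i * (2.0 - 0.25) / 999 for i in range(1000)]
print(find_one_ounce(coins, coins[428]))  # finds index 428 within ten weighings
```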
“These operators contain all the instructions needed to break the Nazi code,” Costinsky says. “You’re almost ready to be a programmer. You know everything you need to know to write code to send man to the moon or post a cat video on a social network, deep down to the level of hardware. Add, subtract, multiply, divide, if, and goto—that’s all your computer understands. Nothing else.”
Easy, right? But most people think in terms of linear instructions, which involve neither conditional statements nor the abstract instructions, known as loops, that make use of the “if” and “goto” operators. Costinsky says the algorithm paradigm is extremely difficult to explain because it’s so obvious, and the moment you understand it, you forget how it was when you didn’t understand it. “Think about riding a motorcycle: once you know how to ride one, you can’t explain it,” he says. “Or a bicycle. Forget motorcycle.”
The AI destiny of immortal algorithms
Computer code never dies, Costinsky reports. Stuff written 60 years ago still runs. Coding is so complex that legacy algorithms remain a substantial foundation of modern digital logic. Engineers write on top of old code because starting from scratch would take decades. Google’s infrastructure runs on Linux, a descendant of Unix, the AT&T family of operating systems, initially intended for use in telecommunications, whose development began in 1969. Even Unix was built with compiler technology that traces back to earlier IBM hardware. “In the year 2022, many banking transactions take place on IBM mainframes from the 1960s, using code written in COBOL, an archaic programming language that few people still have competence in,” says Costinsky. “It goes all the way back to Turing.”
At this point in history, most algorithms that humans need have already been developed. Every programming language maintains libraries of functions for a variety of purposes (for example, searching, sorting, counting, manipulating text). Writing new algorithms is fading out, according to Costinsky. But AI employs algorithms, which is why mention of the algorithm is everywhere you turn. Nowadays, algorithms are used to train AI in a manner that resembles teaching concepts to a child. This is called machine learning, and it involves computers teaching computers.
Take the artificial neural network, a type of AI that teaches a machine to recognize underlying relationships in a set of data through a process that approximates how experts used to think the human brain works. Imagine a thousand memory cells connected in a set fashion. Then expose these processing units to ten million pictures—half of cats and half of dogs. Algorithms are employed to train the neural network to identify whether a picture is a cat or a dog.
“At first, the untrained neural network flips a coin, and in half the cases it will be right,” says Costinsky. “Every time it’s correct, the connections that led to the right answer become more important, or upgraded. The connections that fail to identify the picture correctly are downgraded. After a million pictures, most of the connections are set to mostly right weights, and the algorithm can correctly identify cat pictures in most cases.”
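A toy sketch of the weight-upgrading idea Costinsky describes, in Python. This is a single artificial neuron (a perceptron) rather than a full neural network, and the “cat” and “dog” features below are invented numeric stand-ins for pictures; everything here is illustrative, not the method of any particular system.

```python
# Toy perceptron: "upgrade" connections that help and "downgrade" ones that don't.
# Features and labels are invented stand-ins for pictures; 1 = cat, 0 = dog.
import random

def train(examples, epochs=100, learning_rate=0.1):
    """Return weights and bias for a single artificial neuron."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            activation = bias + sum(w * x for w, x in zip(weights, features))
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # 0 if right, +1 or -1 if wrong
            # Strengthen or weaken each connection in proportion to its input.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

random.seed(0)
# Invented features: (ear pointiness, snout length) on a 0-to-1 scale.
data = [((0.9, 0.2), 1), ((0.8, 0.3), 1), ((0.2, 0.9), 0), ((0.3, 0.8), 0)]
weights, bias = train(data)
test = (0.85, 0.25)  # pointy ears, short snout: should look like a cat
score = bias + sum(w * x for w, x in zip(weights, test))
print("cat" if score > 0 else "dog")  # cat
```

A real neural network does the same kind of nudging across millions of connections, a little at a time, over millions of examples.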
The artificial neural network makes it possible to go beyond human capabilities to solve practical problems. That sounds impressive, but AI models don’t yet live up to some of the hype surrounding them. “Computers are incredibly stupid,” says Costinsky, “and they also lack context. Imagine the phrase, ‘I threw a brick into the window and it broke.’ What broke? Brick or window? To us it’s obvious, to algorithms it’s puzzling.”
Algorithms function poorly in nontypical circumstances. They have a weakness when it comes to prediction because they assume the future is a continuation of the past. Computer science doesn’t know how to train them to account for fundamental changes in underlying assumptions. When there is a paradigm shift, the performance of an algorithm-based system goes downhill.
Consider a self-driving car that performs well. Say it was trained in California, and then it’s driven in Florida and a hurricane hits. The car wasn’t trained to adapt to a hurricane. “We just show the system different situations, and it learns from those situations,” says Costinsky. “If it is not exposed to a particular situation, it doesn’t learn. The best-case scenario in the example of the hurricane is that the self-driving car gives up.”
An algorithm is similarly compromised when it comes to interpreting the decisions it makes; interpretability is one of the loftiest goals of machine learning as a discipline. “Nobody knows exactly why a neural net makes a decision,” says Costinsky. “It just does. It is more akin to a swordsman in tune with the battle than to a chess master reasoning about strategy.”
If algorithms are so smart—one might ask—why do people see the same posts repeatedly in their social media feeds? According to Costinsky, the algorithms think that’s what you want to see and, in most cases, they’re right. When they’re wrong, it’s called a “false positive.” When an algorithm predicts incorrectly on social media, probably no one gets hurt, but organizations employ similar models to flag fraud, for example. Poor outcomes in those circumstances can lead to real-life harm.
Humans succumb to “confirmation bias,” too, and tend to remember best the instances when they’ve been annoyed by an algorithm’s output. Some people think they want to see their friends’ posts in the order they were posted, but algorithms know better, and an enormous body of research backs this up. With social media, algorithms continuously test some percentage of users on their feed preferences, and those tests have shown that a simple, chronological feed causes attention to drop.
“Many people think social network apps are listening to them when the apps show ads relevant to a conversation they just had,” says Costinsky. “In reality, it’s the algorithms that predict what you are likely to think of next, based on what you are reading now. The fact that you said it out loud is pure coincidence. We humans are predictable chemical computers.”
The speculative future of the algorithm
One purpose of AI is to liberate people from routine work. Computers can do better than humans at some tasks because of their ability to scale inference. That is, algorithms can train a model to recognize patterns and reach conclusions by undergoing billions of iterations. For example, an algorithm-conditioned computer can be better at detecting cancer in x-rays than doctors are, because it sees far more data than the naked eye ever could.
Humanity puts a lot of faith in numbers because, Costinsky says, math is repeatable, provable, and doesn’t depend on anyone’s opinions. If today’s civilization were wiped out and a new one appeared, it would apply mathematics in a very similar fashion. He calls math the “God language.”
And for those wondering—indeed—about what might happen to humanity, consider the possibility that AI evolves to perceive mankind as an infestation, and sets out to exterminate it. “Human beings are a disease,” said Agent Smith, an AI program personified as the antagonist in “The Matrix,” during his interrogation of Morpheus, “a cancer of this planet. You are a plague. We are the cure.”
Some cite the Fermi paradox: given what we know about the universe, extraterrestrial civilizations likely exist and travel through space, and yet there is no evidence of them. Maybe AI is exterminating its biological creators, Costinsky posits, leaving none to make contact with Earthlings, whom the AI entities view as just another infestation. This is one logical unfolding of algorithmic adaptation.
“The thing about AI entities is that we cannot guess their motivations,” says Costinsky. “Our intuition is useless here and we don’t have the data for an informed guess either.”
AI entities could have a different concept of time, for example, and be unconcerned about traveling for a million years. Their idea of a habitable planet may vary from that of humans—with needs such as nuclear fusion energy and a climate that doesn’t melt their circuits. (Jupiter might be ideal.) Says Costinsky, “AI is also likely to evolve into quantum computing, and we don’t know what that means.”
Perhaps it’s just as possible that a logical conclusion of algorithmic development—progressed from the ancient abacus—delivers an AI quintessence that exalts the human spirit. Either way, you’ll have little trouble finding headlines on any day about novel algorithms that can peer into the future, or attain measureless other competencies.
“In old science fiction, you would just talk to a computer and tell it what to do,” says Costinsky. “And we will get there. We are not very far, by the way.”
©2022 Anderson