[From the archives] Book Review: The Ethical Algorithm

This is lifted and reposted from my old blog that's not up anymore. I don't use twitter/X/whatever anymore. Now for the post.



The Ethical Algorithm by Michael Kearns and Aaron Roth is a thought-provoking book about how algorithms are shaping society. I expected it to be a bit like Netflix's The Social Dilemma, that is, a giant red flag about how the age of algorithms is ruining society. Instead, I got a forward-thinking book full of practical ideas, many of which can be (and are) applied right now. Refreshingly, the explanations are crisp, and the authors don't assume prior knowledge of computer science, statistics, or ethics. It's definitely worth reading to help understand the technological crossroads our society is at.

The authors are quite open about how nascent this area still is. Many of the challenges we face remain unsolved. Despite that, I came away with the hope that algorithms don't just need "fixing" to be more ethical. We can use algorithms to actively create more ethical outcomes, and in turn a fairer and happier society.

And I have to say, the blend of computer science, statistics, game theory and ethics made for an enjoyable read. I shall give a summary of some of the ideas that I found interesting.

Algorithmic Anxiety

Data is a double-edged sword—we can get great insights into how society works and improve public health, municipal services and consumer products. At the same time, we are the data, and everything these services track about you can be cross-referenced to identify you (more on this to come). The authors cite an example where The New York Times managed to identify Lisa Magrin, a forty-year-old math teacher, from publicly available commercial datasets, just by tracking people's movements. From her movements alone, they could also tell that she visited a weight loss center and a dermatologist, among other details that should not be public knowledge.

It's not just privacy that we have to worry about—algorithms are increasingly used to make more important decisions, beyond recommending things to buy or filling up your social media feed. They are being deployed for automating credit card and loan applications, criminal sentencing decisions and hiring. The authors say that there is no going back technologically here, and we need to instead actively encode ethical principles directly into these algorithms. As skeptical as I was when I read this, I have to say, the ideas are actually pretty solid and sensible.

It also made me realize that the authoritarian algorithmic dystopia described by Yuval Noah Harari in "21 Lessons for the 21st Century" is not as inevitable as he made it sound.

The authors also mention a core problem with machine learning: these are meta-algorithms. We know quite clearly how to create models, but we often have no understanding of the models that are produced. They are the result of optimizing an objective function, and we cannot anticipate the unintended side effects. For example, we might encode sexism into our hiring model because it was trained on historical data, and hiring has typically been quite sexist. Or the most accurate model may not be the fairest or the most privacy-respecting one. So we need to avoid getting monkey paw'd by explicitly encoding other ethical criteria when training models.

"Anonymized data isn't"

Anonymous data is usually not anonymous in practice. The authors give a couple of examples of this. We can take "anonymized" data—datasets with names redacted or zip codes partially obscured—and compare it with other publicly available data (there is so much of it) to re-identify the people in it. Here's an excerpt:

Latanya Sweeney, who was a PhD student at MIT at the time, was skeptical. To make her point, she set out to find William Weld’s medical records from the “anonymous” data release. She spent $20 to purchase the voter rolls for the city of Cambridge, Massachusetts, where she knew that the governor lived. This dataset contained (among other things) the name, address, zip code, birthdate, and sex of every Cambridge voter—including William Weld’s. Once she had this information, the rest was easy. As it turned out, only six people in Cambridge shared the governor’s birthday. Of these six, three were men. And of these three, only one lived in the governor’s zip code. So the “anonymized” record corresponding to William Weld’s combination of birthdate, sex, and zip code was unique: Sweeney had identified the governor’s medical records. She sent them to his office.

Another example was how researchers de-anonymized Netflix data (released for a contest), which contained only ratings and coarse timestamps of when each movie was rated, by corroborating the ratings with public IMDb users and ratings. From the movies people watched, the researchers could infer sensitive information such as political leanings and sexuality.

So quite clearly, obscuring or coarsening identifying attributes (the approach behind k-anonymity, where each record is made indistinguishable from at least k-1 others) is not an effective solution on its own. The authors instead talk about a far more effective idea—differential privacy.

Differential Privacy

The main idea of differential privacy comes from the fact that we still want the insights from the data to be useful; we just do not care about the information coming from any single data point. For example, we may want to learn that smoking is harmful to your health by surveying smokers and non-smokers, but we do not want to learn that Dr Roger is a smoker (who may now be subjected to higher insurance premiums thanks to our findings).

So the crux of it is: we can ensure privacy by ensuring that any one data point does not really affect the outcome. If our study would still conclude that smoking is harmful even if Dr Roger happened to be a non-smoker, we can get the insight we need without violating his privacy. Dr Roger's inclusion in the dataset does not reveal whether he is a smoker or not. How much an individual's data can change the outcome is a measurable quantity, and we can treat it as our "privacy knob". This guarantee also lets us gain insights we normally would not, because people who want to protect their privacy would otherwise refuse to participate in studies.

How to conduct embarrassing polls

The authors present an example of conducting a poll to find out how many Philadelphian men cheat on their wives. One not-so-good way is to sample a population and ask them directly. This will probably give inaccurate results, as people lie about this kind of thing. But what if we did something called a randomized response? Ask each respondent to toss a coin. If it comes up heads, they tell the truth. If it comes up tails, they flip the coin again and answer "yes" for heads and "no" for tails. Now there is no way for us to know whether any individual's answer was the truth or a coin flip. But because we know exactly how the noise was introduced, we can work backwards from the survey results to get our actual findings.

So if 41.6% of the responses are "yes", we can recover the actual fraction of cheaters. Since a cheater answers "yes" with probability 3/4 and a non-cheater with probability 1/4, we can write x * 3/4 + (1-x) * 1/4 = 0.416, where x is the true fraction of cheaters. Solving gives x = 33.2%, i.e., around 1/3 of the population.
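
Here's a quick simulation sketch of that recovery (toy Python, with a made-up true rate of 1/3), just to convince yourself that inverting the noise works:

```python
import random

def randomized_response(is_cheater: bool) -> bool:
    """One respondent: heads -> answer truthfully, tails -> random yes/no."""
    if random.random() < 0.5:          # first flip came up heads
        return is_cheater
    return random.random() < 0.5       # tails: second flip decides the answer

random.seed(0)
true_rate = 1 / 3                      # assumed "real" fraction of cheaters
n = 100_000
answers = [randomized_response(random.random() < true_rate) for _ in range(n)]

p_yes = sum(answers) / n
# P(yes) = 0.75 * x + 0.25 * (1 - x)  =>  x = (P(yes) - 0.25) / 0.5
estimate = (p_yes - 0.25) / 0.5
print(f"observed yes-rate: {p_yes:.3f}, recovered cheat-rate: {estimate:.3f}")
```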

We managed to get information about the aggregate without violating the privacy of any individual. Of course, the random noise added during the survey means the result is not exact, but with a large enough population it is close enough.

This randomized response technique is quite old, but it happens to satisfy differential privacy. We could design the experiment to add more noise to the responses, but that comes at the expense of accuracy for the same sample size.

Another useful property of differential privacy is that it is composable. You can have a differentially private output of one algorithm as the input to the next—and still maintain differential privacy, without needing to reason about the privacy of this new model separately. This lets us have building blocks that can create larger and more complex algorithms. Pretty much any algorithm whose success is defined by the underlying distribution and facts of the world rather than the dataset itself can be made differentially private. A lot of our existing algorithms can be repurposed to something more privacy-friendly.

We don't always have to add noise via randomized response at collection time—we can also add random noise after the data has been collected, although this is less "safe" in case of a breach. Google and Apple have been collecting some recent telemetry via the former approach, which is encouraging. It is hard, though, for a lot of tech departments to convince engineers to switch older pipelines to more private models—we lose accuracy and don't appear to gain much. The authors argue that differential privacy should not be framed as an obligation that degrades existing analyses. It in fact opens new possibilities for collecting and analyzing sensitive data that was previously out of bounds.
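
The post-collection version usually means perturbing aggregate query results with calibrated noise. As a rough sketch (this is the standard Laplace mechanism, not anything specific to Google's or Apple's systems), a private count might look like:

```python
import numpy as np

def private_count(values, predicate, epsilon: float) -> float:
    """Count how many records satisfy `predicate`, plus Laplace noise.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so noise with scale 1/epsilon gives epsilon-differential
    privacy. Smaller epsilon = more privacy = more noise.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 37, 41, 58, 62, 29, 45]                  # toy dataset
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```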

Spooky Correlations

However, there are still things differential privacy does not promise. If information about the aggregate is itself sensitive, the guarantee is of no use. An example given by the authors is the app Strava, which aggregates popular running routes across the world by collecting data from Fitbits and smartwatches. Unfortunately, some routes happened to be in war-torn regions of the Middle East, where nobody owned these devices or used Strava... except the US military personnel deployed there. Their secret military bases were compromised. Yikes.

There are other examples. The proliferation of data and machine learning models lets us draw correlations from facts that seem completely unrelated. The authors cite Hunch, an app that only saw your Twitter followers—not your tweets—and asked a few innocuous questions. Based on this data and known correlations, it could guess your political stance, whether you owned a brokerage account, how often you wore fragrance, or whether you ever kept a journal, all with a shocking 85% accuracy. By learning facts about the world, it could infer sensitive information about you.

There are even questions about how much ownership one has over one's private data. The Golden State Killer was tracked down in 2018, despite his crime spree ending in 1986. This was possible because his family members had voluntarily uploaded their genetic data to public genealogy databases. Law enforcement could match DNA from the crime scenes with that of his relatives and use it to track down and catch the killer. DNA is private, but it is also shared with your family members, so it is never fully in your control.

Algorithmic Fairness

The authors start with an example of how things can inadvertently go wrong. They bring up Google's word2vec, which embeds words as vectors and uses similarity measures to solve problems like "apple is to fruit as carrot is to ...". The embeddings are based on how often words occur together in text. Roughly, the analogy is solved by completing the parallelogram drawn from the three word vectors; whichever word lies closest to the fourth corner completes the analogy. It works wonderfully, until it starts producing sexist results such as "Man is to Computer Programmer as Woman is to Homemaker". The issue is that it faithfully learned the sexism (and other isms) present in all our texts.

The fix is to explicitly identify which words are gendered and which are not. If an analogy tries to associate a non-gendered word with a gendered one, we "subtract off" the biased component. This still lets us keep analogies like "man is to king as woman is to queen".
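
In vector terms, the analogy is just arithmetic, and the debiasing amounts to projecting the gender direction out of words that shouldn't carry it. A toy sketch with made-up 2D vectors (real word2vec embeddings have hundreds of dimensions; these numbers are purely illustrative):

```python
import numpy as np

# Made-up toy embeddings: dimension 0 loosely encodes "gender",
# dimension 1 loosely encodes "royalty/occupation".
vecs = {
    "man":   np.array([ 1.0, 0.0]),
    "woman": np.array([-1.0, 0.0]),
    "king":  np.array([ 1.0, 1.0]),
    "queen": np.array([-1.0, 1.0]),
    "programmer": np.array([0.6, 0.8]),   # picked up a spurious gender lean
}

# Analogy by parallelogram: king - man + woman should land near queen.
target = vecs["king"] - vecs["man"] + vecs["woman"]
closest = min(vecs, key=lambda w: np.linalg.norm(vecs[w] - target))
print("man : king :: woman :", closest)   # -> queen

# Debiasing a word that shouldn't be gendered: remove its component
# along the gender direction (woman - man), keep the rest.
gender_dir = vecs["woman"] - vecs["man"]
gender_dir = gender_dir / np.linalg.norm(gender_dir)
w = vecs["programmer"]
vecs["programmer"] = w - np.dot(w, gender_dir) * gender_dir
print("debiased programmer:", vecs["programmer"])  # gender component zeroed out
```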

At this point a pattern emerges for building fair algorithms: we cannot ignore attributes like race and gender if we want fairness. They have to be explicitly accounted for. The authors argue that it is a bad solution to simply omit attributes such as gender and race when designing algorithms for, say, education loan decisions. Remember the spooky correlations we saw? If we train our models without those attributes (which often happens, since including them is seen as controversial), the model learns about those characteristics through correlations anyway.

Messy, Necessary Tradeoffs

The authors take the loan example further. Assume there are two fictional races, Circles and Squares, with Circles being a richer, more affluent majority and Squares a less privileged minority. There are different ways to define fairness here. One starting point is statistical parity, that is, granting loans to equal fractions of both populations. One algorithm that achieves this is to randomly grant loans to both populations in equal proportions. As bad as the resulting accuracy would be, it would at least be fair, and the authors note that such techniques are suitable for exploration—collecting data to learn about the population before settling on fairness and accuracy criteria.

Exploiting the data to actually do good business is messier. Suppose we discover that Squares are indeed less likely to repay loans—say 15% of Squares are creditworthy as opposed to 30% of Circles. If we want to maintain statistical parity, we either have to deny loans to many creditworthy Circles (which is now unjust to those Circles) or grant loans to many Squares who will default (which increases the lender's losses). There's another way of looking at fairness that avoids this problem: distribute the "mistakes" the algorithm makes evenly across both groups, so that creditworthy Circles and creditworthy Squares are denied loans in equal proportions (equality of false negatives). We could instead equalize false positives if those are the less harmful errors. This is not an entirely satisfying solution either, because while it is fair across groups, it is still unjust to individuals: it brings no comfort to a falsely rejected Square that a Circle is being wrongly denied a loan as well.
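
These fairness notions are all just statistics you can compute from a model's decisions. A small sketch of checking statistical parity and false-negative-rate parity (toy data, made-up numbers):

```python
from collections import defaultdict

def group_rates(records):
    """records: (group, creditworthy, granted) tuples.
    Returns per-group loan rate (statistical parity check) and
    false negative rate (share of creditworthy applicants denied)."""
    stats = defaultdict(lambda: {"n": 0, "granted": 0, "worthy": 0, "worthy_denied": 0})
    for group, worthy, granted in records:
        s = stats[group]
        s["n"] += 1
        s["granted"] += granted
        s["worthy"] += worthy
        s["worthy_denied"] += int(worthy == 1 and granted == 0)
    return {
        group: {
            "loan_rate": s["granted"] / s["n"],
            "false_negative_rate": s["worthy_denied"] / max(s["worthy"], 1),
        }
        for group, s in stats.items()
    }

# toy decisions: (group, actually creditworthy?, loan granted?)
toy = [("circle", 1, 1), ("circle", 1, 1), ("circle", 0, 0), ("circle", 1, 0),
       ("square", 1, 0), ("square", 0, 0), ("square", 0, 1), ("square", 1, 1)]
print(group_rates(toy))
# Here both groups get loans at the same rate (parity holds), yet
# creditworthy Squares are denied more often than creditworthy Circles.
```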

Yet another way of measuring fairness is to equalize the repayment rate among those granted loans in each group. This prevents the bank from treating the two groups differently simply because Circles appear to make more money for it than Squares.

Here's what sours the soup—we cannot, in general, satisfy all these fairness criteria at once. There are mathematical impossibility theorems demonstrating this. We have to explicitly make these uncomfortable tradeoffs. The truth is that these tradeoffs were always there and were being made implicitly. Algorithms force us to face this reality and be explicit about what kind of fairness we want.

Reservations
(Note: This part is rather visual, and better understood when you read the book)

The authors make the point about being explicit with a (simplified) example that quite strongly mirrors the reservation system in India and the debates surrounding it. Say we have a model that decides whether a Square or a Circle applicant gets accepted at an institute called St. Fairness, based on a cutoff score. Choosing the single cutoff that minimizes prediction errors ends up being really unfair to Squares. Circles, being more affluent, can splurge on SAT coaching and mock tests and be better prepared overall, so they naturally get better scores (and more attempts). But that does not mean the many lower-scoring Squares would do worse in college.

Maybe we could lower the cutoff score, which leads to more prediction errors but at least does not unfairly reject as many Squares. But fundamentally, the problem is that circumstances are quite different for Squares and Circles. So one suggestion is to have two models, one per group, giving two cutoff scores. This combined model shows a vast improvement in fairness while keeping reasonably good accuracy. The authors note that an algorithm that explicitly models the two races separately would be quite controversial, and even illegal in some countries. Messy.

One thing we learn for sure, though, is that we should never pick a model that is dominated—one for which some other model is both more accurate and fairer. Often you can improve fairness without hurting accuracy, and vice versa. Beyond a certain limit, though, we have to trade fairness against accuracy. This limit can be plotted as a curve of error versus unfairness, giving us all the models that are optimal for our two criteria. It is then up to our discretion to choose where on this curve—called a Pareto curve—we want to sit.
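
Computing that frontier from a pool of candidate models is straightforward once you can score each model on both axes. A minimal sketch (models represented just as (error, unfairness) pairs):

```python
def pareto_frontier(models):
    """models: list of (name, error, unfairness).
    Keep only models not dominated by any other model
    (dominated = another model is at least as good on both axes
    and strictly better on at least one)."""
    frontier = []
    for name, err, unfair in models:
        dominated = any(
            (e2 <= err and u2 <= unfair) and (e2 < err or u2 < unfair)
            for _, e2, u2 in models
        )
        if not dominated:
            frontier.append((name, err, unfair))
    return sorted(frontier, key=lambda m: m[1])

candidates = [("A", 0.10, 0.30), ("B", 0.12, 0.18), ("C", 0.15, 0.15),
              ("D", 0.14, 0.25), ("E", 0.20, 0.05)]
print(pareto_frontier(candidates))   # D is dominated by B; the rest stay on the curve
```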

Fairness Gerrymandering

And of course, fairness versus accuracy is not the only tradeoff. Different measures of fairness are also at odds with each other. And sometimes our fairness measures are not explicit enough, as in a phenomenon called fairness gerrymandering.

The example given is as follows. Suppose you have to hand out twenty tickets to see the Pope among Circles and Squares, who may each be male or female, and you want a fair distribution. Ideally we would hand five tickets each to Square males, Square females, Circle males and Circle females. But a machine learning model asked to be fair to gender and race separately might give ten tickets each to Circle males and Square females and exclude the other two groups. It still meets the criteria of equal distribution across gender and across race, but it hurts people at the intersections.
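
You can see how the lopsided allocation slips past the marginal checks with a quick tally (toy numbers matching the example above):

```python
from collections import Counter

# (race, gender) -> tickets handed out in the gerrymandered allocation
tickets = {("circle", "male"): 10, ("circle", "female"): 0,
           ("square", "male"): 0,  ("square", "female"): 10}

by_race, by_gender = Counter(), Counter()
for (race, gender), n in tickets.items():
    by_race[race] += n
    by_gender[gender] += n

print(by_race)    # Counter({'circle': 10, 'square': 10})  -> looks "fair" by race
print(by_gender)  # Counter({'male': 10, 'female': 10})    -> looks "fair" by gender
print(tickets)    # ...but two intersections got nothing at all
```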

The fix? We could have an adversarial setup consisting of a Learner and a Regulator, where the Learner proposes a model and the Regulator assesses its fairness across the various group intersections. This setup mitigates the problem quite a bit.

But you can see how things get complicated. There are lots of intersections to account for—income, race, gender, ability and many more things. At this point, it's also worth considering that models might be unfair to individuals rather than groups. Harari warned of this as well in his book. We do not yet have a solution for making algorithms that do not discriminate against individuals without wreaking havoc on accuracy.

Garbage in, Garbage out

Another thing that should be addressed in any discussion of fairness is that, even with all this trickery, our model is only as good as the data we give it. If our data is the product of inherent bias—like admissions historically favouring Circles because they are better understood by largely-Circle committees—our model will be flawed no matter what. We can also create dangerous feedback loops. A classic example is police deployment data. Due to past biases, police might be deployed more to town A than to town B. With more police in town A, more crime is detected there, confirming the flawed hypothesis and making the model look accurate. That data is fed back in, and the problem is exacerbated. Yikes. Often we need to look at the whole system that drives behaviour. Which brings us to the next part of the book.

Games People Play with Algorithms

The authors introduce the basics of game theory. Why is it relevant here? A lot of algorithms affect collective behaviour and can be modelled as a game where the actors are all trying to achieve their own goals. The example they start with is the dating app Coffee Meets Bagel. A journalist using the app had stated that she had no racial preference for match suggestions. Weirdly enough, she was then matched almost exclusively with Asian men. How did that happen? It turns out many other users had set racial preferences to exclude Asian men. This left people like the journalist, who had no preference, flooded with the matches everyone else excluded. Her only way out is to also start excluding Asian men, which perpetuates the problem. This is an example of a bad equilibrium.

What makes this example noteworthy is that the algorithm isn't really at fault here. It's doing what it says on the tin, that is, obeying the users' preferences. The behaviour of some users, though, is shaping the collective outcome. We thus need to design algorithms that optimize for the collective rather than for individuals. This is where algorithmic game theory comes into play.

The Commuting Game

The authors bring up the example of commuting to see how we can tackle systems better. Commuting is a game where the goal of all commuters is to reach their destination in the shortest time possible.

Apps like Google Maps and Waze see where all their users are, estimate traffic from that data, and suggest to each user the route that should get them to their destination quickest. This turns out not to be optimal. The authors demonstrate it with a toy example that is a bit too involved for this summary, but the main idea is that there are two routes: one that is fast but slows down as more cars use it, and one that always takes a fixed, slow time. The fast route at its most congested is exactly as slow as the fixed-time route.

Since Google Maps only cares about each individual commuter's goal, it will always suggest the fast route to everyone—it is never slower than the fixed-time route, no matter how much traffic is on it. But this is not the optimal outcome for the collective. If a smaller fraction were sent to the fast route, the average commuting time would actually decrease, which is better for everyone: less total time spent by cars on the road.
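
Here's that toy example in numbers (my parameterization, not the book's exact figures): say the fast route takes x hours when a fraction x of the cars use it, and the fixed route always takes 1 hour.

```python
def average_time(x: float) -> float:
    """x = fraction of cars on the congestible route.
    Its travel time equals x; the other route always takes 1 hour."""
    return x * x + (1 - x) * 1

# Selfish routing: everyone takes the "fast" route (x = 1), since it is
# never worse than the fixed route for any individual driver.
print("selfish equilibrium:", average_time(1.0))         # 1.00 hours on average

# Social optimum: try all splits and keep the best one.
best = min((average_time(x / 100), x / 100) for x in range(101))
print("optimal (avg_time, split):", best)                # ~0.75 hours at x = 0.5
```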

It is possible to optimize for the collective if we model roads as equations, even higher order ones, and run algorithms that optimize for the average time. There are issues with the incentives though.

Gaming the Game

It is well documented that Google Maps and Waze can be manipulated—some people report fake traffic jams to improve their own commute. And the incentive to follow a collectively optimal navigation algorithm is even weaker: as an individual, you might be asked to take the slower route so that others benefit. Many would ignore such suggestions, rendering the algorithm useless. People tend not to account for the fact that, over many commutes, following the collective algorithm's suggestions gives better outcomes than an individually-oriented one.

There is one possible weird trick to get people on board and prevent manipulation of routes: differential privacy. Recall its main property—the aggregate outcome is not affected much by any single data point. If a user defects or feeds in manipulated data, the route calculation barely changes. That leaves only one reasonable response—follow the suggested route.

There may be a more direct way around the manipulation problem down the line: a centrally coordinated system of autonomous vehicles that takes route choice away from drivers entirely. Not so different from flights today, in a way.

Echo Chambers

The authors then discuss a topic that finds itself in the press quite a bit—algorithmically created echo chambers. They give the example of a basic book recommendation system, which works through a process called collaborative filtering. The basic idea is to look at user ratings. To recommend a book to Priya, you look at books rated highly by users who are similar to Priya, i.e., users who have liked the same books as her in the past. If you plot the ratings of "similar" users, you get clusters around certain books, and you make recommendations by picking from those clusters. A nice property of these clusters is that we don't have to understand or label them; we just know they correspond to a certain "type" of user. We can then recommend pretty much anything beyond books—just look at what users of the same "type" are doing.
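
A bare-bones sketch of user-based collaborative filtering (toy ratings matrix and cosine similarity; the names and numbers are made up):

```python
import numpy as np

users = ["priya", "amit", "sara", "leo"]
books = ["book_a", "book_b", "book_c", "book_d"]
# rows = users, cols = books, 0 = unrated
ratings = np.array([
    [5, 4, 0, 0],   # priya
    [5, 5, 1, 0],   # amit (similar taste to priya)
    [1, 0, 5, 4],   # sara
    [0, 1, 4, 5],   # leo
], dtype=float)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0  # recommend something for priya
sims = np.array([cosine(ratings[target], ratings[i]) for i in range(len(users))])
sims[target] = 0.0                       # don't count priya herself

# Score each book by similarity-weighted ratings of the other users.
scores = sims @ ratings
scores[ratings[target] > 0] = -np.inf    # skip books priya already rated
print("recommend:", books[int(np.argmax(scores))])
```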

Is this a bad selfish equilibrium? For shopping recommendations it's not too bad, since your shopping habits don't really affect others. There are consequences, though: these algorithms nudge behaviour, even if users can choose to ignore them. And it gets messier when the same machinery is applied to social media, where users are shown content that only confirms their worldview, pushing society towards divisiveness and polarization. Those are definitely bad equilibria.

It is possible to inject more diversity into these recommendations. Remember the similarity measure used to build the clusters? We can widen the distance threshold to diversify recommendations, or deliberately recommend a clearly labelled section of "Opposing Viewpoints". Of all the alternatives presented in the book, this seems surprisingly easy, and I am not sure why it isn't already the norm on social media. Perhaps it has been tried and shown to lower engagement? I will add, though, that the "Opposing Viewpoints" idea is implicitly what Twitter seems to thrive on, and it's not exactly pleasant, to put it subtly.

Stable vs Unstable Equilibria

We then get an example demonstrating that some equilibria are unstable, meaning one or more of the actors has an incentive to defect from their assigned outcome, which leads to chaos. The example is matching students to medical schools, where both sides have ranked preference lists. It has a lot of details I won't go into, but the main point is that we prefer a matching in which no one has an incentive to defect. Such a matching is also Pareto-optimal: there is no way to improve anyone's outcome without making someone else worse off.

It's important to note that while a stable matching is optimal in this sense, it does not mean everyone will be happy with their outcome ("the body of a hanged man is in equilibrium"). The authors then describe an algorithm for reaching a stable matching, the Gale-Shapley algorithm, through a rather whimsical lens of Victorian courtship.
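
For the curious, the proposing side of Gale-Shapley (deferred acceptance) fits in a few lines. A sketch with hypothetical preference lists, students proposing to schools that each have one seat:

```python
def gale_shapley(student_prefs, school_prefs):
    """Deferred acceptance: students propose, schools tentatively keep
    their best proposer so far. Returns a stable student -> school matching."""
    free = list(student_prefs)                    # students yet to be matched
    next_choice = {s: 0 for s in student_prefs}   # index of next school to try
    match = {}                                    # school -> student
    rank = {sc: {st: i for i, st in enumerate(p)} for sc, p in school_prefs.items()}

    while free:
        student = free.pop()
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        if school not in match:
            match[school] = student
        elif rank[school][student] < rank[school][match[school]]:
            free.append(match[school])            # bump the weaker tentative match
            match[school] = student
        else:
            free.append(student)                  # rejected, will propose again
    return {st: sc for sc, st in match.items()}

students = {"asha": ["mit_med", "jhu_med", "pes_med"],
            "ben":  ["jhu_med", "mit_med", "pes_med"],
            "cara": ["mit_med", "pes_med", "jhu_med"]}
schools  = {"mit_med": ["cara", "asha", "ben"],
            "jhu_med": ["asha", "ben", "cara"],
            "pes_med": ["ben", "cara", "asha"]}
print(gale_shapley(students, schools))
```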

Algorithmic Mind Games

The last bit of algorithmic game theory is in a different context—the design of algorithms themselves. Remember how we fixed fairness gerrymandering? We have a Learner and a Regulator, who play a game where they try to reach their respective objectives (accuracy for the Learner, fairness for the Regulator). They reach a Nash equilibrium, which is a compromise of the two.

There are other similar games we make algorithms play. A lot of the AI used to play video games, chess, Go or backgammon works through self-play. All it knows at the beginning are the legal moves. The first few times, it plays awfully, but it learns from this data, improves, and then has the better version play against itself again.

We also have Generative Adversarial Networks (GANs), which have grabbed attention because they enable the synthetic generation of data and "deepfakes". Similar to our first example, a Generator produces synthetic data, while a Discriminator tries to tell the synthetic data apart from the real thing it is supposed to imitate. Both keep learning: the Discriminator gets better at spotting fakes, which pushes the Generator to produce ever more realistic data (for example, cat pictures), up to a point.

The authors note the potential applications for good—using a differentially private discriminator would let us create synthetic medical records that are useful for training other models without compromising patient privacy.

Almost cheekily the authors then segue to another kind of "game"—the one played by researchers to hoodwink you (or themselves).

Led Astray by Data

This part of the book gave me strong Taleb vibes. It shows how easy it is to fool around with—and get fooled by—data. They start with an email scam example. I get a stock tip telling me that Lyft stock is going to go up and that I should buy. I ignore it and move it to the spam folder. The next day the prediction comes true, which might have been pure chance, so I ignore it again. But the emails keep coming for the next ten days, and they're correct every single day. Now I'm listening. So I buy the spammer's product, which is more of these stock tips. Soon enough their predictions are hot garbage, no better than random. What happened?

Turns out it's a combination of scale and adaptivity. The spammer sent out a million such emails, half predicting an increase and half predicting a decline in the stock price. Half of those predictions will turn out right. The next day, the exercise is repeated with only the people whose emails turned out right. By day eleven, there would still be around a thousand people who have received nothing but correct predictions. I happened to be one of the chumps in the authors' example.
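
The arithmetic is just repeated halving, which you can sanity-check in a couple of lines:

```python
recipients = 1_000_000
for day in range(1, 11):
    recipients //= 2   # only those who got a correct prediction stay on the list
    print(f"day {day}: {recipients} people have seen only correct predictions")
# after day 10, roughly a thousand people think the spammer is a genius
```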

This use of optionality is also what plagues modern scientific research. The authors recount examples of junk science that fooled people, such as the popular "power poses" research that led to a TED talk, or the psychology studies on "priming" with claims like "reading old-people words such as wrinkles, bingo, or Florida makes you walk slower". Far too many studies on diet and psychology fall into this category, and like the authors, I am very skeptical when I read a scientific claim in those fields.

So how does one game scientific research? One famous way is p-hacking. Much like the email scam, we repeat the experiment (or slice the data) several times and report only the most interesting finding, the one whose p-value falls within the required threshold. We rely on random chance but keep only the outcomes that show an interesting effect (which does not really exist).
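
A tiny simulation makes the danger obvious: even when there is no effect at all, running enough tests will hand you a "significant" p-value. (Sketch under the standard assumption that p-values are uniformly distributed when the null hypothesis is true.)

```python
import random

random.seed(1)

def one_hacked_study(num_attempts: int, alpha: float = 0.05) -> bool:
    """Simulate a researcher who re-runs a null experiment until something
    'works'. Under the null, each p-value is uniform on [0, 1]."""
    return min(random.random() for _ in range(num_attempts)) < alpha

trials = 10_000
for attempts in (1, 5, 20):
    hits = sum(one_hacked_study(attempts) for _ in range(trials))
    print(f"{attempts:>2} attempts -> 'significant' in {hits / trials:.1%} of studies")
# 1 attempt stays near 5%, but 20 attempts yields a false discovery ~64% of the time
```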

Scientists are driven to such unethical practices because the incentives are bad—mostly positive results and "unexpected" findings end up being published in prestigious journals. Even without any p-hacking, we get a very skewed subset of research being published. It doesn't even take a rogue scientist: if a thousand scientists perform the same experiment sincerely, the one who gets the surprising (but probably outlier) result is the one who gets published.

The authors also recount how Baidu cheated in the ImageNet competition by creating multiple accounts to validate their models beyond the limit of two submissions per week. This is a similar kind of problem: they were adapting their model backwards from the test results.

One simple fix for p-hacking (and the multiple comparisons problem in general) is the Bonferroni correction, where we multiply the reported p-value by the number of experiments conducted—or equivalently, divide the required p-value threshold by the number of hypotheses. This makes intuitive sense, although the way it is framed in the book puzzles me a bit. They give an example along these lines: suppose the chance of getting twenty heads in a row in coin tosses is one in a million (it isn't exactly, but fine for a toy example); then the probability that this event occurs somewhere across k experiments is at most k * p, which works out to 1.0 in their example. What I find weird is: if we did the experiment 2 million times, would that mean a probability of at most 2.0? Doesn't seem right. (I might be having a brain fade here.)
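
My best guess: k * p is just a union bound, which becomes vacuous once it exceeds 1, while the exact probability (for independent experiments) is 1 - (1 - p)^k and never does. Plugging in numbers:

```python
p = 1 / 1_000_000   # chance of the rare event in a single experiment

for k in (1_000, 1_000_000, 2_000_000):
    union_bound = k * p                 # Bonferroni-style bound, can exceed 1
    exact = 1 - (1 - p) ** k            # probability the event happens at least once
    print(f"k={k:>9}: bound={union_bound:.3f}, exact={exact:.3f}")
# k=1,000,000 -> bound 1.000, exact ~0.632; k=2,000,000 -> bound 2.000, exact ~0.865
```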

Either way, applying the Bonferroni correction in Baidu's case would still have confirmed their accuracy claim. Yikes. The issue there is adaptivity, which the correction doesn't account for: it's not the same experiment repeated many times—Baidu tweaked the algorithm to improve the validation accuracy with every submission, fitting the model to quirks of the test data rather than to the real world.

The Garden of Forking Paths

"The garden of forking paths", coined by Andrew Gelman and Eric Loken refers to adaptively choosing the test or model based on the hypothesis. You can extensively explore and try on different models and pick the one that suits your claim well. This is easy to do accidentally. I have even observed, in my time at my University, there was no shortage of this kind of thing. This false discovery is harmful and expensive, and the source of a lot of misinformation. It has only been exacerbated with the explosion of data.

The authors propose a few solutions. The first one is already in effect at quite a few journals: "pre-registration", where researchers must commit to their methodology and hypothesis before carrying out the experiment, ideally before even gathering data. This shuts off the garden of forking paths and forces them to commit to their claim. While certainly effective, it is also a bit too conservative, as it prevents exploration. Many important breakthroughs come from being inspired by previously known results, and it is quite difficult to gather fresh data every time—and reusing old data creates another fork in the path.

The other alternative proposed by the authors is a special interface called "The Garden's Gatekeeper". Instead of unrestricted access to the data, the researcher works through a constrained interface where they can only ask questions relevant to an experimental procedure. The answers are produced in such a way that everything useful the researcher learns can be summarized in a description much smaller than the set of all possible hypotheses, so we only have to apply a correction for this smaller set.

An example of how that might look: in the ImageNet competition, the interface would only tell a researcher whether their model improves on the state of the art by 1% or more. Since accuracy can only improve by 1% at most a hundred times, everything the interface reveals can be described with at most a hundred models—far fewer than the million models the competition actually received. I won't get into the details of how this works; the book conveys it better and in more detail.

The Other Stuff is Messy

The authors spend the last part of the book addressing the shortcomings of what they have discussed so far. While we were able to get reasonable quantitative definitions of privacy and fairness, the same has not happened for other things we care about in making algorithms ethical: interpretability, accountability, and morality.

Interpretability is subjective. It largely depends on the audience: to a layman, most statistical models are not interpretable; to a student of classical statistics, they might be easy to interpret, while a deep learning model is not; and to a deep learning researcher, even that model may be reasonably transparent. One way to deal with this is to first understand the target group of observers, test their understanding of models of various complexity, and use the results to design a model that meets their criteria of interpretability.

The authors also note that black-box models can still be probed. Since a model produces an output for any input, we can tweak someone's features and observe how the decision changes to figure out why, say, their home loan application was rejected.

The other issue the authors bring up is morality. Are there decisions that should be off-limits for algorithms? There are the various trolley problems involving self-driving cars, and the possibility of AI being used in warfare to kill people. The authors note there is a case for not quantifying these things into algorithms the way we did with privacy or fairness. But that comes at a cost too: a military AI might be better than humans at distinguishing civilians from targets, so refusing to use such a system could risk more innocent lives.

To take off into even more abstract territory, there is the question of an AI singularity that may choose to wipe out the human race. The idea is that AI will reach a point where it surpasses human intelligence, and can therefore design AI more intelligent than anything a human could come up with. An algorithm playing a game against itself could set off a chain reaction and improve exponentially. The authors note that this is very debatable—the growth might not be exponential, and even if it is, we don't know at what rate (even 1% growth is exponential).

The other thing we must be wary of is the fact that algorithms that optimize for a specific objective might lead to unintended consequences. Think of how fairness gerrymandering came about, and stretch it to the extreme. An algorithm might start doing dangerous things to achieve its objectives, like messing with the world's economy and preventing people from turning off the machine.

Concluding Thoughts

This book was a fun read. I don't know if I should have a rating system; for now I'll stick to a binary-ish recommend/don't recommend. And this book I'd definitely recommend to anyone interested in computers, AI, statistics and what our future holds. Although I do feel the ideas I found most interesting here are probably well known to a lot of engineers and researchers working in AI. I don't know.

Appendix: Why did I write this review?

Surprisingly, I read this book because it was the reference material for one of the courses I picked in college this semester called "Ethical Algorithm Design". Even though this book was fun, I do not recommend anyone from PES University to take this course. I can explain this privately, for those in my college who are interested, just DM me on twitter or email me or something.

Anyway, writing this review was the only way I could get myself to half-prepare for the exam that is coming up soon. I don't enjoy studying towards a test, but I like writing blogs. Win-win.