The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
This was a fascinating read about some of the history behind the current AI hype and many of its problems. The authors do a fantastic job of digging into what the different tools being hyped actually are, the basics of how they work, and why they aren't as "powerful" as they seem. They point out the built-in biases many of these tools carry, the costs to the creatives whose work feeds the tooling, and many of the downstream negative effects.
As the ending of chapter 4 puts it:
Input a set of symptoms, and what comes out looks like a diagnosis. Input a legal query, and what comes out looks like a contract or a legal brief. Input a school subject and a request for a lesson plan on literally anything, and what comes out will look like a set of facts that you can teach students and exercises to have them do. We empathize with the people on the ground: teachers, physician's assistants, and paralegals, among many other professionals faced with great need and insufficient resources, wanting to believe that these systems actually work. But we have no empathy for powerful interests looking to shirk taxes, nor the forces within government who respond by shredding the social safety net and pushing so-called AI as a cheap replacement. And we have nothing but scorn for the would-be creators of AI, tech philanthropists, and their allies who claim to be acting in the interests of everyone, pointing to real needs in the world and selling their tech as a solution. But that solution is only poor facsimiles of welfare services, healthcare providers, legal aid, and educators. Facsimiles that the tech barons would never rely on for their own families. Just because you've identified a social problem doesn't mean LLMs or any other kind of so-called AI are a solution. When someone says so, the problem is usually better understood by widening the lens, looking at it from its broader context. As Shankar Narayan, the tech and liberty project director for the ACLU of Washington, asked regarding biased recidivism prediction systems, "Why are we asking who is most likely to reoffend, rather than what do these people need to give them the best chance of not reoffending?" Likewise, when someone suggests a robodoctor, robotherapist, or a roboteacher, we should ask, "Why isn't there enough money for public clinics, mental health counseling, and schools?" Text synthesis machines can't fill holes in the social fabric. We need people, political will, and resources.
I'd recommend this book to anyone looking for a little more context on the environment we're in as all these AI tools are being introduced.