Book Review: Rebooting AI: Building Artificial Intelligence We Can Trust
- Kevin D

This week's review is on Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus and Ernest Davis.
In the nearly six years since its publication, Gary Marcus has remained a staunch contrarian even as generative AI has exploded in financing, development, and implementation across the tech, business, and education worlds. Rebooting AI marks his first real book-length critique of where the field was heading and the limitations those choices imposed. He has since broadened this critique on his Substack, which I recommend following.

Rebooting AI unfortunately falls into the category of books that are really extended blog posts. The level of depth isn't enough to satisfy technical or expert readers; instead, the length serves to pile on examples and fluff rather than deepen an argument or steelman a counterpoint. Marcus and Davis proceed in a logical order through eight chapters that identify the problem, define deep learning, critique the specifics of current AI approaches, and offer advice on a deeper path to understanding.
While defining the problem - the limited capabilities of AI - the duo rightly critique the AI hype train and identify three separate challenges. The first is the "gullibility gap": as humans, we have difficulty distinguishing between humans and machines (18). Second, there is a gap in mistaking "progress in AI on easy problems for progress on hard problems" (20). Lastly, many fixes in AI are quick patches, lacking true robustness. These three contribute to a false perception of what AI can and can't do, and they are just as true in 2025 as in 2019, when the book was published.
Marcus and Davis then explain that the book is not just about "what AI can't do now - and why it matters - but it's also about what we might do to improve a field that is still struggling" (25). Chapter 2 surveys where AI stood in 2019 and some of the dangers of trusting it, citing numerous examples, including cases of bias and "proxies" drawn from Weapons of Math Destruction. Chapter 3 then takes a step back and looks at what deep learning is and how it works. The crux: "deep" refers to the number of layers in a neural network, not "that the system has learned anything particularly conceptually rich about the data that it has seen" (62). An update here would be helpful, though I'm sure the authors remain skeptical on the most salient points.
Chapter 4 turns to "reading," highlighting the limits of assistants and AI at the time with text and context. Of course, in the half decade since publication, there have been advances in this area in particular. We may not be at the assistant level of Her (highlighted in Rebooting AI), but we are certainly closer than we were with Alexa and Siri 1.0 back then. I'm not sure a revision of this chapter, or even cutting it wholesale, would change the basic points of the book, but it certainly seems dated in the era of ChatGPT.
Chapter 5 examines AI in real-world contexts - essentially robotics. A revision of this chapter could distinguish what remains an issue for Waymo and Tesla from what has been conquered by a combination of big data and LIDAR. These pages highlight difficulties for robots that go beyond what endless CAPTCHA data might solve. Examples are replete but, again, not necessarily helpful for the reader to check.
These chapters of limitations lead into the final section of the text, which proposes next steps in a general way. Chapter 6 addresses the issue of "common sense" - the unstated assumptions that humans hold almost intuitively but that logic-driven machines do not. The authors propose an entry-level approach of eleven clues from cognitive science, drawing on Marcus' expertise, as a first path forward. These are helpful in a broad sense but suffer from a lack of specifics and of technical, or even philosophical, depth.
The next chapter continues the discussion of common sense before offering a summary prescription, which is definitely the TL;DR of the book:
Start by developing systems that can represent the core frameworks of human knowledge: time, space, causality, basic knowledge of physical objects and their interactions, basic knowledge of humans and their interactions. Embed these in an architecture that can be freely extended to every kind of knowledge, keeping always in mind the central tenets of abstraction, compositionality, and tracking of individuals. Develop powerful reasoning techniques that can deal with knowledge that is complex, uncertain, and incomplete and that can freely work both top-down and bottom-up. Connect these to perception, manipulation, and language. Use these to build rich cognitive models of the world. Then finally the keystone: construct a kind of human-inspired learning system that uses all the knowledge and cognitive abilities that the AI has; that incorporates what it learns into its prior knowledge and that, like a child, voraciously learns from every possible source of information: interacting with the world, interacting with people, reading, watching videos, even being explicitly taught. Put all that together, and that's how you get to deep understanding.
As an ironic coda, they add: "It's a tall order, but it's what has to be done" (179). Understatement of the year, indeed. This prescription leads into a closing chapter on what would be required for a trustworthy AI system and a brief epilogue. Neither has the philosophical or ethical depth to engage with anything beyond trusting AI's capabilities - not its intentions or alignment.
Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus and Ernest Davis.
Rating: 3/5 Stars
Good For: An introduction to the problems with Generative AI in 2019.
Best nugget: Good identification of weaknesses inherent in deep learning systems.
Please note: As an Amazon Associate I earn from qualifying purchases. However, I am not paid to provide reviews or use content.




