Erik Larson to speak at COSM 2021, unraveling AI myths
If you’ve ever had the feeling that we are all being played by people marketing “Soon AI will think like you or me!”, you might want to catch Erik J. Larson’s talk at COSM 2021 (November 10–12). Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Harvard University Press, 2021), is a computer scientist and technology entrepreneur. Founder of two DARPA-funded AI startups, we’re told he’s currently working on fundamental issues in natural language processing and machine learning. He has written for The Atlantic and for professional journals and has tested the technical limits of artificial intelligence through his work with the IC² technology incubator at the University of Texas at Austin.
But now here’s the fun part: Larson has found a method for deciding who is influential, especially in academia. His algorithm, applied to Wikipedia, had to be carefully constructed. Influence depends on subtler metrics than, say, a post going viral and garnering a million views:
… A two-headed kitten can attract a lot of attention without having any influence. As Larson says, “The problem is, you can have some extremely influential people who are not, on the whole, popular in modern or broad media terms, right? Like you can have someone who’s an expert in string theory or something, but they’re not sort of… They don’t have a huge following on Instagram.” [08:52.00 EL] No, but they can dominate a field of science that conveys basic ideas about our universe to the public. Sometimes such people, Stephen Hawking for example, are well known. Often they are not. – Mind Matters News (April 30, 2021)
Finding out who they are would help us better interpret cultural changes.
Some critical reviews of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do give us an overview of Larson’s book:
A useful place to start is to understand AI issues as they present themselves today. Computational intelligence tends to be very “narrow” in scope, and that’s by design: an AI that plays chess, due to its high degree of specialization, cannot also play checkers. An extreme case of this is what the author calls the “brittleness” problem: not only can a narrow AI not perform other tasks, but even slight deviations in the setup, ones a human would not even register, completely spoil the computer’s output. Consider an AI that can play the game Breakout perfectly, which requires moving a paddle back and forth to bounce a ball toward the bricks. Moving the paddle a few pixels closer to the bricks would not dramatically affect a human player’s performance, but do the same to an AI and its “whole system crumbles.” The same is true of image-detection software: it usually has a very high success rate, but changing just a few pixels here and there completely confuses the system.
Hassan uz-Zaman, “Can computers think like humans? Review of ‘The Myth of Artificial Intelligence’ by Erik Larson” at Medium (June 13, 2021)
Larson’s central chapters deal with a problem that can be illustrated by an example from a 1979 article by philosopher John Haugeland: Jones, entering, says, “I left my raincoat in the tub because it was wet.” Smith effortlessly understands that Jones meant his raincoat was wet, not the tub, although the “it” in his statement could grammatically have either referent. How does Smith do this? And how could a computer do it, as it must if it is to engage in normal conversation? Deductive logic does not seem to be the tool for this job.
Although computers are extremely good at applying deductive rules, those rules can only generate lines of reasoning that are as tight as mathematical proofs. That is not what is needed here: Jones was probably talking about the wetness of his raincoat, but there is no deductive guarantee. Nor is the problem solved by finding patterns in large datasets. Computers are good at that too, but the statistics can point in the wrong direction: the wetness of bathtubs may have been mentioned more frequently than the wetness of raincoats.
Christopher Mole, “Famous wet raincoat” at Times Literary Supplement (June 25, 2021)
If you want to hear and talk to Larson at COSM 2021, you can save hundreds of dollars by registering before October 1, 2021.
Note: Peter Thiel, Carver Mead, James Tour and Babak Parviz will also all be there in person, along with other techies who are shaking things up.
You can also read:
How Erik Larson found a method for deciding who is influential. The author of The Myth of Artificial Intelligence decided to apply an algorithm to Wikipedia, but it had to be very specific. Many influence metrics depend on crude measures like the number of page visits. Larson realized that influence is subtler than that.
A new book massively demystifies our “AI overlords”: it’s not going to happen. AI researcher and tech entrepreneur Erik J. Larson expertly dissects apocalyptic AI scenarios. Many thinkers have tried to stem the tide of hype, but as one information theorist points out, no one has done it so well.