
Episode 117 - A.I. Hype and the Expert Problem with Christian Hubbs

Today’s Local Maximum features Christian Hubbs, host of the Artificially Intelligent podcast. We talk about the most recent examples of AI hype, more on the coronavirus models, and the nuanced problems of listening to the experts.

About Christian Hubbs

 

Christian Hubbs is the host of the Artificially Intelligent Podcast, and is currently a PhD student at Carnegie Mellon University in Pittsburgh, PA working on data science, AI, and engineering.

Artificially Intelligent Podcast
Datahubbs.com: Your Hub for Data Science
Article on Towards Data Science: The Pandemic of Uncertainty

Twitter
LinkedIn

How to Cut Through the AI Hype Amid the COVID-19 Pandemic with Christian Hubbs

As long as COVID-19 persists, people will look for every possible means of fighting the pandemic. That urgency, however, is no excuse to abandon rational thinking for recklessness. While AI holds a lot of potential to solve human problems, the current hype adds little substance to the healthcare response to the pandemic.

How can you determine if an AI application has critical flaws? Which experts should you trust?

Getting a better understanding of AI is the key to answering these concerns. In this episode, Christian Hubbs debunks many of the myths that people hold about AI. He also teaches how to spot bad AI work and how to assess expert credibility.

If you want to learn the truth about AI expertise in the context of COVID-19, tune in to this episode now!

  1. Find out why the AI hype around COVID-19 detection requires scrutiny and critical thinking.

  2. Discover the common pitfalls plaguing AI models.

  3. Learn how to pick the most reliable research from the best experts.

Links

Genetic Algorithm Article in ScienceMag.org: Artificial Intelligence Evolving all by Itself
Original Paper from Google on Genetic Algorithm ML
Towards Data Science: Beware the AI Hype on Detecting Covid-19
Reddit: Why the AI Hype is Absolutely Bonkers
For more on serious Genetic Algorithm research: Yoshua Bengio

Financial analysts mentioned: Jim Bianco, Chris Martenson

New York Times: Reporting from March on the Neil Ferguson model predicting 2.2 million deaths

Max’s appearances on Artificially Intelligent:
https://www.artificiallyintelligent.tech/101-covid-19-location-data-and-more-with-max-sklar/
https://www.artificiallyintelligent.tech/episode-54-the-local-maximum/


Episode Highlights

Christian's Beginnings

  • His interest in AI started when he discovered several podcasts on data science and AI.

  • He was inspired to talk more about AI as public interest grew and the field gained popularity.

  • Wanting to establish himself, he set out to put out content for his audience.

  • He works on developing deep reinforcement learning systems to help make decisions within the chemical industry.

Public Perception of AI & Evolution

  • People are awed and intimidated by the power of machine learning, as they associate it with computers “evolving.”

  • Many AI systems use genetic algorithms, which repeatedly try out various functions, assess their performance, and recombine the most successful ones (see the sketch after this list).

  • While genetic algorithms are powerful, they aren't exactly groundbreaking, and there's no reason to be afraid of trying to understand them.
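
To make the idea concrete, here is a minimal, self-contained sketch of a genetic algorithm in Python. The target value, fitness function, mutation rate, and population size are toy assumptions for illustration only; they are not taken from the episode or from Google's paper.

```python
import random

TARGET = 42  # toy goal: evolve a list of numbers whose sum lands near 42

def fitness(candidate):
    """Higher is better: penalize distance from the target sum."""
    return -abs(sum(candidate) - TARGET)

def mutate(candidate, rate=0.2):
    """Randomly nudge some genes."""
    return [g + random.uniform(-1.0, 1.0) if random.random() < rate else g
            for g in candidate]

def crossover(a, b):
    """Recombine two parents at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, genes=5, generations=200):
    population = [[random.uniform(0.0, 20.0) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # 1. Assess performance and keep the most successful candidates.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # 2. Recombine (and mutate) survivors to refill the population.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(round(sum(best), 2))  # should land close to 42
```

Nothing here "evolves on its own": the objective, the operators, and every knob are fixed by the programmer, which is the point the next section makes.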

Taming the AI Hype

  • Currently, some firms use automated machine learning, a self-learning approach that automates much of the routine work required to build AI models.

  • However, these systems still don't have full autonomy. Programmers must give them objectives and other parameters (see the sketch after this list).

  • AI systems have yet to evolve on their own and take over the world.
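
As a concrete illustration, here is a minimal sketch using scikit-learn's GridSearchCV as a stand-in for an "automated" system. The dataset and parameter grid are arbitrary choices for the example; the point is that the model family, the search space, the objective, and the validation scheme all still come from a human.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# The search loop is automated, but every important decision below is human-made.
search = GridSearchCV(
    estimator=LogisticRegression(max_iter=5000),  # model family: chosen by a person
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},     # search space: chosen by a person
    scoring="accuracy",                           # objective: chosen by a person
    cv=5,                                         # validation scheme: chosen by a person
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```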

AI in the Time of COVID-19

  • Many start-ups are pledging to develop AI-powered applications for contact tracing.

  • However, most probably won't make a significant impact. Even pessimistic reports put the pandemic's duration at around five years, and developing a contact tracing app properly takes even longer than that.

  • Another popular AI concept was to use image processing to detect viral infections from x-ray images.

  • It turned out the training data didn't include negative controls, which discredited the resulting models and the people behind the software (see the data check sketched after this list).
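
A basic data audit catches this kind of problem before any model is trained. The sketch below assumes a hypothetical metadata file (xray_metadata.csv) with "label" and "source" columns; the file name and columns are made up for illustration.

```python
import pandas as pd

# Hypothetical metadata for an x-ray dataset: one row per image.
metadata = pd.read_csv("xray_metadata.csv")  # assumed columns: "label", "source"

# 1. Does the dataset contain negative controls at all?
print(metadata["label"].value_counts())

# 2. Do positives and negatives come from the same sources? If every positive
#    image comes from one source and every negative from another, a model can
#    score well by recognizing the source rather than the infection.
print(pd.crosstab(metadata["label"], metadata["source"]))
```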

Common AI Mistakes

  • A small bug, such as a dropped minus sign, can make AI models learn erratically. Training may report little or no loss even when the model is way off (a short example follows this list).

  • Bad data engineering can lead to less rigorous datasets, which can make models appear more accurate than they really are.

  • During learning, some AI algorithms may find shortcuts that let them dodge substantial penalties.

  • This phenomenon, called reward hacking, lowers the apparent error but can cause models to fail on new test data.

  • In these scenarios, it might be better to relinquish some control over the learning process to human researchers capable of intuitive thinking.
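
To make the dropped-minus-sign failure concrete, here is a small NumPy sketch (an illustration, not code from the episode). With the sign dropped from a negative log-likelihood loss, the number being minimized gets smaller as the model assigns less probability to the correct class, so training can report steadily "improving" loss while the model gets worse.

```python
import numpy as np

def nll_correct(p_true):
    """Negative log-likelihood of the true class: lower really is better."""
    return -np.log(p_true)

def nll_buggy(p_true):
    """Same formula with the minus sign dropped: minimizing it now rewards
    assigning the true class as little probability as possible."""
    return np.log(p_true)

p_true = np.array([0.9, 0.5, 0.01])  # predicted probability of the correct class
print(nll_correct(p_true))  # roughly [0.11, 0.69, 4.61]: worse predictions cost more
print(nll_buggy(p_true))    # roughly [-0.11, -0.69, -4.61]: worse predictions look "best"
```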

Expert Opinions

  • Just because you're not an expert doesn't mean your analysis and conclusions are invalid.

  • Non-experts can view problems with a fresh perspective, free from the assumptions experts typically take for granted.

  • More than ever, we need intelligent people from different fields to look into our current problems.

  • However, don't always assume experts are right. Many factors, such as assumptions and external events, can influence the accuracy of conclusions.

5 Powerful Quotes from This Episode

“When you combine artificial intelligence and evolution together, people just freak out.”

“One of the things I think people don't understand is that these systems, these AI systems, these machine learning systems, aren't going to just take off and teach themselves. That's not how this works. You know you have to give them an objective.”

“What I've seen is that people push back and say, like, ‘Well, you aren't an expert, so you shouldn't comment. You shouldn't be looking into this.’ And I think that that goes completely in the face of scientific progress.”

“Bringing in a lot of people who are intelligent from other fields, I think, is exactly what needs to be done.”

“It's crazy because there's so much uncertainty around what's happening, and it's really hard to separate the signal from the noise.”

Related Episodes

Episode 115 on the IHME Coronavirus Models
Episode 114 on the Market for Credibility
Episode 97 with David Kopec and a discussion on Genetic Algorithms
Episode 24: Christian Hubbs's previous Local Maximum appearance
Episode 18 on MIT's hyped-up Psychopathic AI

