Episode 273 - Stop Making AI Boring

Max grumbles about the corporate and dry nature of MLconf and the talks from top companies, even as the world of AI and ML explodes with controversies, breakthroughs, and new applications.
- Should AI research be paused for 6 months?
- Are "X for Good" talks always evil?
- Are engineers afraid to share any perspective slightly outside the mainstream?

Links

MLconf: MLCONF NYC 2023 AGENDA
YouTube: Censorship Run Amok: Covid, The Lab Leak, Masks & The Twitter Files
David Friedman’s Substack: The Lab Leak Theory
Future of Life: Pause Giant AI Experiments: An Open Letter
YouTube: Yann LeCun and Andrew Ng: Why the 6-month AI Pause is a Bad Idea

Twitter: Chamath Palihapitiya: “If you invent a novel drug, you need the government to vet and approve it (FDA) before you can commercialize it.
If you invent a new mode of air travel, you need the government to vet and approve it (FAA) before you can commercialize it.
If you create a new security, you need the government to vet and approve it (SEC) before you can commercialize it.
More generally, when you create things with broad societal impact (positive and negative) the government creates a layer of review and approval.
AI will need such an oversight body. The FDA approval process seems the most credible and adaptable into a framework to understand how a model behaves AND its counterfactual.
Our political leaders need to get in front of this sooner rather than later and create some oversight before the eventual/big/avoidable mistakes happen and genies are let out of the bottle.”


Twitter: Pedro Domingos: “We need to regulate word processors to ensure they’re never used for nefarious purposes.”

Related Episodes

Episode 9 - Fixing Facebook and Lindy's Law
Episode 56 - True News, Fake Faces, and Adversarial Algorithms
Episode 266 - Simplicity, Complexity, and Text Classification with Joel Grus

Transcript

Max Sklar: You're listening to the Local Maximum episode 273.

Narration: Time to expand your perspective. Welcome to the Local Maximum. Now here's your host, Max Sklar.

Max Sklar: Welcome everyone, welcome! You have reached another Local Maximum. 

Well, I haven't done a live recording in a while. This is my first recording from my apartment here in Stamford, Connecticut, so I thought I'd give an update on the last few weeks. Of course, we're going to do a fuller update and an interview with Aaron soon. 

There's the move going on in my life. There's the new job. I've been celebrating Passover: I did the Seders last week, and I'm doing the whole no-bread thing this week. It's very interesting to give yourself dietary restrictions rather than what I usually do, which is just eat everything in front of me, which is probably not a good thing. And my birthday is tomorrow!

Now, my takes today are going to be maybe a little controversial. I'm giving a bit of a cranky take today. I'm not even in a cranky mood, because the weather is so great out there. But I want to talk about this. 

Right after my move, on March 30th, I was very excited to go into New York City and attend, finally, a live, in-person conference. I hadn't done that since well before COVID. This was MLconf, a machine learning conference. It had talks from engineers at companies like Google, Lyft, Foursquare, and Pinterest. I'll post the agenda here. It was also at 230 Fifth, which, for those of you who are not in New York, used to be one of my favorite rooftop bars. 

I remember when I was working at Wireless Generation, or just after, they had one of their holiday parties up there at the end of the year; that was all the way back in 2010 or something. And then I also went to a crypto event there in 2019, Crypto Stars. Before 2010, I think I just went there a lot, like three or four times. When I look at my Foursquare history, I haven't been there that much since. But it's a pretty cool rooftop bar, which is why I'm so upset they made such an uncool conference. 

I know what I'm supposed to say. I'm supposed to say how awesome it was and give everyone a pat on the back, the whole machine learning community, the whole AI community. But sometimes someone's got to point out we can do better. I'm going to be a bit of a film critic here and be brutally honest. What a freaking waste of time that was. Waste of money too! It cost me $400 to go down there. 

Very few people in tech and data science know how to give a halfway decent talk. And it pissed me off! It really did. So first, if we're going to revive in-person conferences for technology, we're going to have to shape up. Hopefully, today's episode will be a better version of the conference. 

Unfortunately, left to their own devices, many speakers in tech take a fascinating topic and make it boring and dry. You're supposed to do the opposite. Even worse, I think COVID and the god-awful cultural trends in the cities where we reside have exacerbated this problem. If you knew nothing about AI or machine learning and just went to this event to check it out and see what's going on, you would not come away thinking this is a thriving industry. Which is what AI should be! It's the most exciting time in AI. 

Everyone is talking about it. Everyone's typing to ChatGPT and posting their results. There are so many controversies about it, which we'll get to in a minute. People are like, ‘Oh my god, this is moving too fast. We're gonna lose our jobs. It's going to take over the world. We better stop.’ Everything is happening with these large language models, Midjourney, all of the images. But if you went to this conference, it just seemed like a sad, sorry whatever.

I've noticed this for a long time. That's one of the reasons I got into podcasting. I was like, ‘Hey, maybe it's gonna be hard to be the most interesting podcaster in the world. But if I could do better than all of the tech speakers out there to put together a show, I think that's a lot easier to do.’

So as I said, most of the talks were incredibly boring, with speakers just wanting to list off all the details of their company and their infrastructure, bullet point by bullet point, without any big picture. And that goes on for hours. I tried to pay attention, but if you looked around, most of the attendees were just sitting there on their phones. Probably most people didn't pay outright like I did; not that I should be proud of that, I got fleeced there. Their companies paid, and the companies were probably like, yeah, sure, whatever. Most attendees sat there on their phones for that whole day. What's the point? There was nothing creative or interesting there, even though, again, it's such an exciting time to be in AI. 

Going back to how they take a fascinating topic and make it boring: I'm reminded of a talk I saw on functional programming many years ago, over a decade ago. I was like, ‘Oh my God, I love functional programming so much. I hate this talk.’ Things like that happen all the time. I've been to some exciting talks, some great talks in software, in AI, and in machine learning. But here, the odds were not great. 

They think that 30 minutes listing all the technical specs and details of their project is what the audience wants. Companies pay to send their engineers here with a 30-minute speaking slot. I don't understand why. Do all these companies want to send their engineers to 30-minute speaking slots just to make the engineers feel important? There was no attempt made whatsoever at storytelling or salesmanship in their presentations. I understand an engineer is not going to be a master of that, but some attempt is all I ask. Also, they don't treat their lecture like a product. I don't know what their goal is. It's just, ‘yeah, then I did this, then I did that.’

I found that when I was in New York, the meetups were generally better, the ones I went to anyway. I went to the machine learning meetup, the Python meetup, even the statistics meetup and the Math For Math’s Sake meetup. 

Those are generally better, first of all, because most of the time the companies didn't send the attendees there. It's people who are really interested in the subject, people who really want to entertain the audience; they're like, well, everyone's coming out for me tonight. It's usually a smaller group of people. And even when it's a large group, it's like, ‘hey, let's slam your Python project.’ People are trying to sell their Python project. At a hack day, people want to get others interested, so they're really trying to sell it. So meetups are generally better. Maybe we have to do those again, because almost all of them have gone offline. 

Maybe a lot of the speakers were inexperienced. Look, I understand. Even if I complain that the speakers are boring, I don't hold them responsible individually. I hold responsible the companies sending them there and the group putting together the conference, which has been doing this for 10 years, by the way. So either they never learned in 10 years, or they just kind of threw in the towel. Probably the latter. But the participants didn't seem to care; they were just like, sure, great, whatever. 

Now, this is not because I'm just salty that MLconf rejected my talk on bias correction, which I covered here on the podcast in Episode 218, so who cares? Hey, if you want to learn about bias correction, just listen to Local Maximum 218; you don't have to attend the conference. I thought it would have been nice to give it there, though. They couldn't check into my background. Obviously, I just sent something in and they were like, well, he's not with a company. I think they're just looking for the big names to sell tickets, and I no longer have the Foursquare name behind me, so that kind of sucks. I wish having the Local Maximum behind me had the same kind of oomph in this crowd. But maybe not; maybe it carries sway in other crowds. 

But the conference doesn't have to be like this. This is in contrast to NormConf, which I went to back in December. It was virtual. The speakers were vetted, and each one had something important to say about the practice of data science. So that's the difference between a corporate product and an independent product, which NormConf was. 

Vicki Boykis put on NormConf, basically, with a few other people. So her reputation was on the line: if she got her people together and it was bad, her reputation would be tarnished. It was like, ‘Hey, I'm organizing this,’ so you really didn't want to do a bad job. And I reached out to Vicki and complained about MLconf. She said, ‘Yeah, this seems to be par for the course.’

I usually go to conferences to mingle with people, and I'm happy that's a possibility again. Hopefully, we'll get better conferences. Maybe it's a learning experience for me: these corporate ones, don't go; the independent ones, great.

Speaking of independent conferences: one of the speakers at NormConf was Joel Grus, who I had on this podcast in Episode 266. His talk at NormConf was something like, what's the simplest thing you could do, and why didn't you do it? That is a talk that gets people's attention. You learn a lot in that talk. I learned a lot about NLP and what the best practices are. So it was a nice talk, as were a lot of them at NormConf.

It's unclear what the organizers of MLconf were trying to achieve. If it's the networking, then I think we're better off organizing a data science happy hour or something. I'm going to a Data Science Day at Columbia later this month; maybe that will be a little better. 

So how are we going to change this? In the future, now that I'm in the New York area, maybe I'll go to more meetups and conferences and look for events with vetted speakers and keynotes. So again, Columbia Data Science Day has some speakers from universities. They're usually better speakers because they're used to speaking in front of a classroom, so maybe that will be better. Maybe I can hold an event for the podcast in New York soon. I know a lot of you listeners are in New York and hopefully, we'll get to do that. 

A few things stood out to me. Sarang Aravamuthan, I hope I pronounced that right, talked about class imbalance problems in NLP. That was similar to the bias correction stuff I talked about earlier; since I dealt with that and wrote that paper on it, I handed him a copy. Foursquare was there, the venues team, which I used to be on in 2014. Most of them weren't there back then, so it was nice to talk about old projects with them. 
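
For anyone who wants the flavor of the class imbalance problem: when one class is much rarer than the other, a classifier can look accurate while mostly ignoring the rare class. Below is a minimal sketch, my own illustration rather than anything from the talk or my paper, of one standard correction: reweighting the training loss by inverse class frequency. The tiny dataset is made up; the scikit-learn calls are standard.

```python
# Sketch: correcting class imbalance in text classification with class weights.
# Illustrative data only; in practice the imbalance is usually far more extreme.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great product", "works fine", "love it", "nice quality",
         "happy with it", "solid purchase",
         "free money click now", "spam offer inside"]
labels = [0, 0, 0, 0, 0, 0, 1, 1]  # class 1 (spam) is the rare class

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# class_weight="balanced" weights each class by n_samples / (n_classes * count),
# so mistakes on the rare class cost proportionally more during training.
clf = LogisticRegression(class_weight="balanced").fit(X, labels)
print(clf.predict(vec.transform(["free offer click now"])))  # expect [1]
```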

But the most interesting talk, I think, was by a woman named Matar Haller. She gave an excellent presentation on ActiveFence's content moderation. The talk was titled AI For Good. That's always a red flag for me; I am certain their tech could be used for evil. And I wanted to ask a spicy question, but they went overtime and didn't permit questions on this one talk alone. This was the one where we could really get into it, a real spicy back-and-forth. This was the one where I was ready to ask a question, and they were like, ‘No questions on this one, let's move on.’ For the other talks it was ‘Any questions?’ and just silence, everyone on their phones. 

Anyway, her main point was that content moderation is adversarial. That means that if you try to build a model to filter out certain content that people create on user-generated content sites, then people are going to generate new content that gets around that model.

We've talked about adversarial models in the past. Sometimes you can actually harness that adversarial nature: there's something called an adversarial algorithm, where you have two algorithms in an arms race. In Episode 56, we talked about GANs, generative adversarial networks. At the time, those were creating fake faces, and now we have all sorts of fake images all over the place. You have one algorithm trying to figure out, okay, was this made by a machine or is this a natural image? And another one generating them, and you have them go against each other. 
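
Since GANs came up, here's a minimal sketch of that two-algorithm arms race, assuming PyTorch and a toy one-dimensional "real" distribution. It's illustrative only, the shape of the idea from Episode 56 rather than anyone's actual system: a generator learns to imitate the data while a discriminator learns to catch it.

```python
# Sketch of a GAN training loop: generator G vs. discriminator D in an arms race.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a Gaussian the generator must learn to imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: score real samples as 1, generated samples as 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G on this step
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator scores as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# The generated distribution's mean should approach the real mean of 2.0.
print(G(torch.randn(1000, 8)).mean().item())
```

One design note: the `detach()` in the discriminator step freezes the generator's weights there, so the two models genuinely train against each other rather than cooperating.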

When you can have a machine be the adversary, you get that kind of feedback loop, and that's why we've seen such amazing progress in that tech. But in the context of content moderation, do we have machines that try to go against the content moderation? There are probably people trying to build them on the back end. 

I talked about this problem, though, five years ago, way back in Episode 9, when it comes to Facebook. Episode 9 was called Fixing Facebook and Lindy's Law. I argued that Mark Zuckerberg's claim that he would solve content moderation in five to ten years with AI was flawed because the problem is adversarial. There are so many ways that people will get around your moderator. It's been five years, and it looks like Facebook has tried to solve it by basically blowing up their community and all norms of free speech. It's worth a real follow-up: did Mark Zuckerberg's promises of what he would do in five to ten years actually come to fruition? It's only been five years, so maybe we'll give him a little more time, but we'll see what the next five bring. 

The speaker, Haller, was a very good speaker, first of all. She had some really funny examples. She also gave a trigger warning at the beginning, that some of these images and texts are upsetting, and that gets people to pay attention. She talked about a number of different issues that come up. It's all about context. Maybe it's code words used by a neo-Nazi group, where the same words are also used in regular speech; you don't want to censor things you shouldn't be censoring. Same with images: there could be violent images, but you don't want to censor journalists who are covering a war zone. That kind of thing. 

So a very great talk, very good job. It all sounds well and good, but I think I now have this rule for these kinds of talks: there's always a talk on using X for good, and it's always the most evil one, every single time. In this case, I tried to get the speaker on the show, but she declined. She was nice enough not to ignore me; she wrote a nice note saying she can't do it. Actually, she didn't write the note. She had a secretary or someone write a note for her, so it was someone else I got it from. That was really interesting. 

But I'm still going to make my case, even though nobody from there is coming on the show: what they are building will be used by authoritarian regimes, and by evil, authoritarian elements in current Western society, to suppress free speech. 100%, that's what they're building. 

It turns out these platforms don't even care that much about Nazis. They care about protecting the narrative of the powerful, because that's going to ingratiate them with today's version of high society. In this case, I will defer to a recent report by John Stossel called Censorship Run Amok. Very well done. It's a four-minute video on YouTube, and I highly recommend it. It lays out very clearly, and in a very short period of time, how tech companies and technology have been used to stifle free speech. Most importantly, they've been using technology to filter out facts that are true. 

Now look, in an open society, people are allowed to put out claims, and then those claims get vetted. That's the whole point. You put out hypotheses, you say things, and then we vet them. But when you're filtering out things that turned out to be true, and that happens routinely, that means there's a huge problem. So he laid out his examples, like how the lab leak theory of the origin of COVID was ridiculed. 

By the way, David D. Friedman has an interesting piece on Substack about that now, a Bayesian analysis of the lab leak theory, which I haven't read yet. That was one of Stossel's examples. He also said they wouldn't report the truth on masks, or vaccines for that matter. And not just wouldn't report on it; if someone said something, they would just get banned. 

They called the Hunter Biden laptop story misinformation. I really think that when Twitter banned the New York Post, that was a big turning point. Which direction, I don't know, but that was a big turning point in our society. Even a big newspaper, one founded in 1801 or somewhere around there, has its news stories banned by Facebook and Twitter. And he said something like, ‘Well, Facebook says they didn't ban it, but they were sneakier. They just suppressed it.’ Then that story, of course, turned out to be true. It wasn't even a case of ‘Well, the New York Post made a mistake, so we've got to ban them.’ I don't think the New York Post should have been banned regardless, but the New York Post was banned, and what they were saying was true. 

So these infamous examples aren't outliers. This seems to be the whole game now. I'm sure you could come up with pages and pages of it. Something like this comes out every week. 

Everybody working on content moderation just thinks it's okay to ignore this completely. Maybe because they're paid to ignore it. Maybe if they say something about this at companies like ActiveFence, or any of these other companies actually doing content moderation, the leadership doesn't want to hear it. Except maybe Elon Musk. He's been criticized for doing censorship in a different way, but I think he has been open to criticism. So that's been really interesting. We'll see. 

To quote Stossel, ‘Don't let anyone say we will be the gatekeepers. We know what's true, they don’t.’ 

My intonation wasn't quite right there. What he said was, ‘Don't let anyone say we will be the gatekeepers, we know what's true. Because they don't. They don't know what's true.’

So all right, that was the best talk of the day, and it was totally evil. So that's funny. What else was going on at the time? This was March 30th. 

There was this six-month pause proposed on AI, which should have been talked about at the conference. You'd think a conference on machine learning would talk about all of this controversy in the news. So you go on futureoflife.org and see this petition. The key phrase is, ‘Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’ A lot of risks are cited. The big names on this are Elon Musk, of course; Steve Wozniak; Andrew Yang; and Yoshua Bengio, a well-known researcher. 

This is really interesting. I don't agree with it, and a lot of researchers agree with me that it's kind of ridiculous. You can go on there and look at all the risks they cite. None of them seem particularly compelling to me, although some people tell me to take another look at that. We'll see.

Machine learning researchers Yann LeCun and Andrew Ng disagree too. Andrew Ng, not Andrew Yang; notice that Andrew Yang is the politician who signed the petition calling for a six-month pause. Andrew Ng is a machine learning researcher and professor at Stanford who has an open course on machine learning. He's opposed to this idea, along with Yann LeCun. 

A couple of reasons. One, it's going to make research go secret, which is a bad idea. It's also shooting ourselves in the foot while countries like China, and I don't know why I have to say it like that, but I feel like everyone has to say it like that now, keep working on this. Of course, other countries as well. You're shooting your own research team in the foot. And notice that all the people calling for this six-month pause are not pausing their research. They're kind of waiting for this to get passed or something. Like everyone in the world is going to agree to this? I don't think so. 

But there's no reason, I think, for us to suspect that we'd know anything in six months that we don't already know. Like, what is this? Pause for six months so we can figure out what the hell is going on? What are we going to figure out? Now, the people who don't support this pause are saying that the risk of AI systems comes when you use them to make decisions. So it's the decisions and actions that can be regulated, not the mathematical model. 

Yann LeCun says, ‘If we're smart enough to create the system, we're smart enough to design good objective functions and keep iterating on that.’ Also, to me, it sounds a lot like ‘six months to flatten the curve,’ the kind of moral panic that just springs up these days. 

A few comments on this from Chamath Palihapitiya, who is, I believe, a well-known Silicon Valley investor. Let me pull that up: CEO of Social Capital, he's been on a lot of things. He tweets: 

Another thing that came up; this is his tweet, not me: ‘If you invent a novel drug, you need the government to vet and approve it (the FDA) before you can commercialize it. If you invent a new mode of air travel, you need the government to vet and approve it (the FAA) before you can commercialize it. If you create a new security, you need the government to vet and approve it (the SEC) before you can commercialize it. 

More generally, when you create things with broad societal impact, positive and negative, the government creates a layer of review and approval. AI will need such an oversight body. The FDA approval process seems the most credible and adaptable into a framework to understand how a model behaves and its counterfactual. Our political leaders need to get in front of this sooner rather than later and create some oversight before the eventual, big, avoidable mistakes happen and genies are let out of the bottle.’ 

This is the most incredibly bizarre tweet I can think of, because it proves the opposite. I would like someone to look into how these other agencies have actually worked out, from both sides. How has the FDA worked out? How many people has the FDA killed? What about the SEC? The SEC just destroyed LBRY the other day. It's creating rules as it goes. It has probably been responsible for more financial loss than it's prevented. It's hard to quantify, but it seems that way for all of these. The FAA, I've got nothing; I feel like I'm not really attuned to what's going on at the FAA. 

But look, people say, ‘Oh yeah, there are all these agencies that are really good and fix things.’ Meanwhile, I hear about the evils of all these agencies all the time! And now you want an AI agency that's going to be stuffed with people like all those people at Google who want AI to police speech. They're going to look at AI bias, but they're going to use it to steer AI toward their own bias. 

I think this is just an incredibly obnoxious idea, but I guess this is the way things go: powerful people with silly ideas get control over the world. Hopefully not too much.

Pedro Domingos, another machine learning researcher I follow and like (he wrote The Master Algorithm), says, tongue in cheek: ‘We need to regulate word processors to ensure they're never used for nefarious purposes.’ Meaning that content creation tools, like word processors or calculators, of which AI is a more complex version, are not where the problem lies. Lots can be said about that. 

I'm sure we'll have many debates on that here as well, if you want. There are so many angles from which to debate this, so if you have any ideas on how we can debate it, like what topic to pick up specifically with me and Aaron, or who I can have on the show, let me know at localmaxradio@gmail.com or join our Locals at maximum.locals.com. 

Now look, this six-month pause proposal was going on right as we were having MLconf. You'd think people would want to talk about it, and people did want to talk about it. This was the big news of the conference. So the Google engineer gets up, they give a whole presentation, then they go to the Q&A. First question: ‘What do you think about this six-month pause?’ What do you think about this thing that AI researchers are talking about, asked of the AI researcher at Google? And they literally answered, ‘I have no comments about that.’

This was the one interesting topic, well, two interesting topics, counting the content moderation one, that you could talk about. The problem, again, is not the individual; I don't blame them. The problem is that everyone is afraid to give their opinion, even high-level people, even the people we desperately need to hear opinions from. 

I think the questioner just wanted to get a perspective. I don't think the questioner was trying to pin this person down or get them to say something on the record that would come back to them later. But people are afraid. Perspectives are risky these days. It's a very scary trend for the industry, and for the country, honestly. Hopefully, we'll fight against that here on the Local Maximum. I don't know what we can do other than power through it. If you've got any more ideas, let me know. 

So at the end of the conference, we get to the reception. Now finally, some good hors d'oeuvres. They had mini sliders. Good, but not great. The lunch was pretty mediocre: kind of sad turkey sandwiches, and honestly, they kind of forgot the vegetarian options for people who were vegetarian. But the hors d'oeuvres at the end were okay. Some sliders, some drinks, whatever. When you're paying hundreds of dollars, though, I don't know. 

I started talking to some folks about the conference. Then we talked about the technology. We were talking about AI, about the products we're building. Some people recognized me from years ago. I talked to Foursquare people. So things were starting to look up. I was like, right, I like this networking. It's pretty good. 

Then, just as it started getting good, the news broke of the Trump indictment, and that's all anyone wanted to talk about. They were all saying things like, ‘Yeah, finally got him. Thank God we have an adult like Joe Biden in the White House.’ And I was like, ‘Okay, time to go. Time to get out of this crappy conference.’ 

I watched that indictment on TV a few days later, and no one in New York can see that this is obviously a ploy to get rid of a political opponent, which is the exact same thing Trump was impeached for four years ago. In that case, it's questionable whether that's really what he was trying to do. In this case, they're actually doing what they likely falsely accused Trump of doing in 2018. So whatever. They may get away with it. It's sad. All New Yorkers are clueless. So how is this going to turn out in the long run? I don't know. I predict it will go in an unpredictable way in the next few years. 

I look forward to a time in six years when a new generation is in politics, because this generation is getting way too old, and it looks like life expectancy, unfortunately, has been going down rather than up. I'd rather have life expectancy go up and have this generation be around for another ten years; that would suck, but it's better than people dying. There's been a bunch of news about anti-aging recently that I've been wanting to get to. 

Anyway, what am I trying to say? Our leaders are old. They're really freaking old. In five years, the election of 2028 is not going to have this generation. Things are going in an unpredictable way. Who knows what our politics will look like? 

Anyway, I kind of just slinked away from that conference; I was like, yeah, let's get out of here. So I moved to this area just in time, didn't I? 

Alright, lots more going on this week. No probability distribution of the week this week, but I hope to start that again really soon. There are a lot of shifting alliances on the world stage. The US dollar is kind of losing its status as the world reserve currency. It looks like countries that are historically opponents, like Saudi Arabia and Iran, or Pakistan and India, are moving into China's orbit. I don't have time to cover this in depth today, and I don't fully understand it yet, but I'm thinking about it. It's nuts. Hopefully, I can get Aaron on the show soon and we'll get a true news update. Have a great week, everyone.

Narrator: That's the show. To support the Local Maximum, sign up for exclusive content and our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.
