Episode 170 - Bayesian Inference Regulations and Network Algorithms

Today’s episode discusses the proposed regulations of AI from the European Union, including regulations on Bayesian Inference. This includes the danger and folly of regulating mathematics, but also good practices for decision making and transparency. Finally, we cover graph algorithms, PageRank, and Split Complex Numbers for online dating recommendations.

Links

StatModeling Columbia: EU proposing to regulate the use of Bayesian Estimation
Brookings: The EU path towards regulation on AI
EU: Proposal for a Regulation

Jerome Kunegis: Online Dating Recommender Systems, The Split Complex Number Approach
Sergey Brin and Larry Page: The Anatomy of a Large-Scale Hypertextual Web Search Engine (PageRank)

Relevant Episodes

Episode 167 on Wordproof and Using Timestamping
Episode 105 on Bayes Rule with mathematician Sophie Carr
Episode 78 on Bayesian Thinking
Episode 67 on The Social Credit Scores in China
Episode 21 on Probability
Episode 0 on Bayes Rule

Transcript

Max Sklar: You're listening to The Local Maximum, Episode 170. 

Time to expand your perspective. Welcome to The Local Maximum. Now here's your host, Max Sklar. 

Max: Welcome everyone. Welcome. You have reached another Local Maximum. It is now May 2021. Big week this week in cryptocurrency, of course: Ethereum is pumping, Ethereum Classic is pumping. Dogecoin, what happened to Dogecoin? That's so crazy. I don't know about you, but I actually watched Saturday Night Live for the first time in five or six years to see Elon Musk be the host. I think most of the sketches were not that great, but what can you do? That's actually part of sketch comedy, where a lot of the sketches don't work out; it's sort of par for the course, but only the good ones get remembered. Although there are people who say SNL has gone downhill, and maybe they're right. But anyway, Dogecoin, he's been pumping Dogecoin for quite a while, months and months. And, you know, if you buy the rumor and sell the news, well, Dogecoin took a bit of a dump during the Saturday Night Live performance. 

What is Dogecoin? Some people I know are starting to get into it as an alternative to Bitcoin and Ethereum. Look, Dogecoin is a joke coin. And they can print, or create, generate out of thin air, as much Dogecoin as they want. So where is the value there? I don't know. But, you know, memes can shoot up, can shoot down, some people will make money. I don't know. It's a crazy world out there in crypto, a truly crazy world. 

I don't have any Dogecoin. I did try mining Dogecoin years and years ago, but I know I got less than one, and one Dogecoin is less than $1. So unless it goes to a really high amount, I will not be upset that I threw it away. Alright. So we have a couple of things to talk about today. I want to talk about some of the news from the European Union and what they think about Bayesian inference. Not really what they think about Bayesian inference, but how it was referenced in one of their regulatory documents. I know, regulatory documents, not that interesting. Bayesian inference, very interesting. So we're going to talk about that. And then finally, we're going to get into a little bit of network or graph algorithms when it comes to machine learning and artificial intelligence, and how they're different from traditional machine learning algorithms. I think it's a really interesting topic, and I hope you enjoy it. 

So first, let's talk about the EU. The headline that we got from StatModeling at Columbia: EU proposing to regulate the use of Bayesian estimation. So I see a headline like that, and I really hate the idea of regulating math or AI. And it brings to mind, I think it was an episode of Numberphile I saw once, and I could not find the exact example. But the UK wanted to propose a regulation on cryptography, which is really just mathematical transformations: taking one set of data, putting it through a transformation, and getting another set of data. And they proposed making X, Y, Z illegal. And then some cryptographer from a university says, this literally makes my book on cryptography an illegal book. So how absurd is that? But I want to look into what they are doing. Because if you're trying to regulate this stuff, you're like a bull in a china shop, you're going to knock everything over. If you think about it, if you try to make certain aspects of math illegal, the law is going to become so complicated that it's hard to tell exactly where they're going to come down. 

But sometimes these commissions, there are humans there. Sometimes they do sit down and say, okay, let me come up with some good ideas. Even if I personally oppose imposing those ideas on everyone, it's interesting to see what's happening here. Do they have any ideas, and what is a good practice when it comes to implementing AI, or essentially, advanced statistical algorithms? Because I have to say, a lot of times when people say AI is dangerous, they're not talking about Terminator. They're not talking about, as I've heard it put before, some robot stocking the shelves that all of a sudden decides to kill all humans. No, they're talking about statistical algorithms and statistical decision making, and when that's appropriate and when it's not, or when it's going to be legal and when it's not. Our brains use statistical models, essentially, in order to identify what we're seeing, for example, in the visual cortex. I hope they don't rule that our visual cortex is illegal, because then we will all have a big problem. But let's see what they say. 

So the European Commission, because there are actually some good ideas here, the European Commission just released their proposal for a regulation on a European approach to artificial intelligence. And as this points out, they get to a definition of artificial intelligence. I'll just read it out loud: “Artificial Intelligence system (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I, and can, for a given set of human defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing the environments they interact with.” So already, we're talking about something, I think the key word is ‘decisions’; obviously, ‘recommendations’ and ‘predictions’ all kind of feed into decisions. So it's essentially decisions that are based on statistical algorithms, automated decisions. And so I think there's a valid question here of, what's a good way to do it? What's a bad way to do it? When do you want a human in the loop? 

Included in the definition of AI that they listed in that annex are statistical approaches, Bayesian estimation, and search and optimization methods. So essentially, Bayesian estimation, and I've gone over Bayesian estimation a lot on this podcast, I don't need to define it again. Well, I'll define it quickly. But if you're interested in what Bayesian inference is, definitely check out the episode on Bayesian thinking. I think that's episode, is it 78? Let me make sure, www.localmaxradio.com/78.

Whoo. Internet is a little bit slow today. Yeah, so Episode 78 is on Bayesian thinking, Episode 21 is on probability, I have a lot of ideas on probability. And of course, those first episodes, zero and one, are also on Bayesian inference. So I'll link to all of that on the show notes page, www.localmaxradio.com/170. But essentially, Bayesian inference is taking a probabilistic model of the world: hey, I think the world might exist as one of these probabilistic processes, and there are several of them, and I have an idea of which ones are more likely and which ones are less likely. Then, as I gather more data, I can zero in on which models of the world, and usually it's a specific question that I'm trying to answer. More exactly, which answers to my question are more likely, and which answers to my question are less likely? 

So in other words, it's just learning from data. It's taking data and saying, okay, I used to think X, but now the data contradicts X and makes Y look a lot more likely. So now I'm going to start to believe Y more than X. That's Bayesian estimation. Even if you don't know probability theory, or even if you don't do Bayesian estimation formally, you're kind of doing it subconsciously all the time, where you form a belief, and then you kind of decide. Or maybe you start off open-minded, and you think there are several possibilities. And then, as information comes in, you eliminate some of them.
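To make that concrete, here's a minimal sketch in Python of a Bayesian update. The coin-flip setup, the hypothesis names, and the numbers are all made up for illustration; the point is just that each observation re-weights the prior beliefs.

```python
# Bayesian update sketch: two hypotheses about a coin, with prior beliefs,
# re-weighted by how well each hypothesis explains the observed flips.
HEADS_PROB = {"fair": 0.5, "biased": 0.8}  # hypothetical models of the coin

def update(beliefs, flip):
    """Apply Bayes' rule for one observed flip ('H' or 'T')."""
    likelihood = {
        h: HEADS_PROB[h] if flip == "H" else 1 - HEADS_PROB[h]
        for h in beliefs
    }
    unnormalized = {h: beliefs[h] * likelihood[h] for h in beliefs}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

beliefs = {"fair": 0.5, "biased": 0.5}  # start open-minded
for flip in "HHHH":                     # four heads in a row
    beliefs = update(beliefs, flip)
# The data now favors the biased-coin hypothesis
```

Each flip shifts belief toward the hypothesis that better explains it, which is exactly the "start open-minded, then zero in" process described above.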

And I think in Episode 105, I was talking to a mathematician. Oh, what's her name? Oh, my God, I forgot. That's really bad. Hold on. Sophie. Sophie Carr, right. Sophie Carr, in Episode 105, talked about it as a giant game of ‘Guess Who?’, where you have many ideas on what the answer could be, and you're constantly eliminating them. Even children do Bayesian estimation. So you can't make it illegal, and that's not what they're trying to do. But they are including it as a technique that could be regulated. What does that mean? According to an article from the Brookings Institution that I read, what's a risk from AI that the EU is talking about here? One of their claims is that this will outlaw a Chinese-style social credit system, and I hope that's true, in that you can't have a society-wide system run by the government that scores each individual person based on a non-transparent statistical model and says these are good citizens and these are bad citizens. Now, even with a transparent statistical model, I wouldn't want to do that, because that is central control. That's total control, totalitarianism, in terms of telling people, this is exactly how we want you to behave, and this is exactly how we don't want you to behave. Which is the exact opposite of one of the phrases in the US Declaration of Independence, the pursuit of happiness; it's kind of up to each individual how they want to pursue happiness. 

So it might outlaw some of those things. But I'm also concerned that it could regulate certain things that might be good. For example, in the United States, insurance is highly regulated in terms of what statistical models you're allowed to use. Maybe you can say, well, that's good, because we don't want discrimination. But on the other hand, maybe the most optimal model is not allowed to be used for insurance, and that increases our insurance rates. So that's a possibility as well. 

Specifically, one of the things the Brookings article mentions that these regulations would do is declare deep-fakes and doctored photos not illegal; you could still make them. A deep-fake is generally considered to be a video that looks real, created by AI. And doctored photos, well, you've all seen Photoshop. But essentially, you would have to declare that the content was artificially altered by AI.

How do you take a regulation like that and fairly impose it? How do you make sure that everybody is playing by the rules here? Because I'm pretty sure you could make a doctored photo, just not tell anybody, and put it out on the internet. And unless it's something completely ridiculous, how would anybody know that you're breaking the law? So that's one thing that I'd be concerned about. So one thing that they want to regulate is just information: they want us to know where this information came from, which kind of sounds good on the surface. And also, the way it's being sold is they want transparency, and they want humans in the loop for some things in terms of decision making. So does it make sense in many cases, for real-world AI systems, to ask these questions: hey, if I make a statistical model, a machine learning model, do I want transparency, and when the final decision is made, do I want humans in the loop? 

Oftentimes, in many cases, it does make sense that yes, when you're building these kinds of things, you do want these things; they're good ideas. That doesn't necessarily mean I support imposing those good ideas on everybody, but I think they can be good ideas. Although sometimes you don't want transparency, like I said, because some machine learning algorithms, like deep learning algorithms, can be more effective, more accurate, but are oftentimes non-transparent. And sometimes that's the trade-off that you really want. But the trade-off is interesting. For example, when designing a visual cortex, essentially a machine that can identify objects, those can be very, very complicated. And oftentimes, if you impose on it, well, we want it to be transparent, we want to know exactly what the machine is doing, then we're not going to have as good of a system. And so for something like self-driving cars, well, you could go either way: I want the best visual system, because I want the car to be the safest. On the other hand, if the car does run into trouble, I want to be able to figure out why that happened. So it's a trade-off where it's not exactly clear what you should do. And it's something that we'll have to figure out over time.

I did talk about timestamping a few episodes ago. Man, I should just pull up my entire episode archive right now because I am bringing up so many old episodes. That would be the episode on Wordproof, Episode 167, where the claim is that maybe if our social media sites, or hopefully our new decentralized information sharing sites, require timestamping, then, I don't know if we can figure out whether photos are doctored, but we will have some transparency in terms of when information was derived, if that makes sense. 

So, all right. What do you think about regulating AI? What is your sense of what problem the EU is trying to solve? GDPR, has it been going well? Has it not been going so well? Go to www.maximum.locals.com to visit our Locals page and share your opinion, or email the show at localmaxradio@gmail.com.

Okay, so today, we're going to talk about machine learning algorithms that are based on a network or a graph. So what's a network or a graph? This gets into real nerdy, low-level computer science stuff. A graph is a collection of nodes, with connections between them. A good example would be on something like Facebook, or Foursquare, of course, where you have a friends graph. I could be friends with you, I could have 100 friends, maybe you have 100 friends. It's not like everybody has one connection; it's a very, very freeform data structure, because anyone could have any number of friends. 
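As a sketch, a freeform friends graph like that is often represented as an adjacency list: a map from each node to the set of nodes it connects to. The names here are made up for illustration.

```python
# A friends graph as an adjacency list: each person maps to the set of
# people they are connected to; any node can have any number of edges.
friends = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice"},
    "carol": {"alice", "dave"},
    "dave":  {"alice", "carol"},
}

def degree(graph, node):
    """How many connections a node has: a simple first-order feature."""
    return len(graph[node])
```

A count like `degree` is exactly the kind of first-order feature mentioned below, before we get into algorithms that pass information across the connections themselves.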

When it comes to machine learning algorithms, usually, let's say, I want to understand something about a person. I might look at information about that person: where they live, what words they tend to type, their height, how many pictures they post, all that sort of stuff. Those are kind of base-layer features that I could use to determine something about that person. Let's say I want to determine their political leanings, like how likely I think they are to vote for my candidate if I send them an ad. There are a lot of politicos who want to do something like that. So okay, I could use features to try to determine whether that person is in that group. But what if we want to make use of the graph? What if we want to make use of who their friends are? I could come up with first-order features on that: how many friends do they have? How often do they add friends? That kind of thing. And those can directly be used as features to learn their political leanings. 

But then, of course, you're like, wait a minute, don't the political leanings of the friends influence the political leanings of the person? And this is not just politics; this is all of marketing. So the answer is, of course, yes. But then there's a question: this is not a single feature. I'm trying to learn a feature of a single person, but they have 100 friends, and all of them have features. And theoretically, all of those features can be filtered down to the single person to try to figure out what the answer is for them. So it kind of gets overwhelming if you're just trying to look at it in terms of, hey, I have features, and I'm trying to learn an answer. 

So what you really need is a graph algorithm where essentially there's information at each node. Each person has information, and each of them also has information that gets shared back and forth across the friends graph, or across any graph that you have. Let me give an example right now. One example is PageRank, and some of you who are in tech will have heard of PageRank. It's fairly old now, I want to say maybe not 30 years, maybe 25 years old. This is from Google, this is from Larry Page. So it's about web pages, but I don't think it's named for web pages; it's named for Larry Page. It's kind of funny that the founder of Google is named Page, but there are a lot of coincidences like that. 

So this is back in the ‘90s. This was Google before it was big tech, before it was, dare I say, evil. Even before it declared itself not evil. This was just a few grad students trying to figure out how to find the best sites online. Well, yes, they were well-connected grad students at Stanford, so they were very well positioned to start Google. But there was no guarantee that Google was going to succeed, because there were so many search engines already in existence, and I think a lot of the investors at that time would have said, you want me to invest in another search engine? Are you kidding? There are so many of these. 

PageRank essentially uses a graph. It starts from the realization that the internet is, and was at the time, a graph of pages, because each page has links to other pages. Every page has inbound links, which are websites that link to it. It also has outbound links, sites that it links to. The idea was that more important websites have more inbound links, and less important websites have fewer inbound links. Somehow, my importance is related to the importance of all the sites that link to me. 

Now you can't calculate this all in one go. Because if you think about it, let's say website A links to website B, and website B links to website C, and website C links to website D, and website D links back to website A: you get a circle of links. And that can actually be a problem for PageRank, because that's called a link farm, and early Google really had to deal with that. Because it would make these pages look like they're important, when really they're not; they're all bad pages that link to each other. 

But PageRank did work pretty well in its first iteration. What it would do is assign an importance to each page, and then it would say, okay, at each step, I am going to keep something like 15% of my importance. Then 85% of my importance score, I am going to send out to my outbound links equally. If I link to five pages, I'm going to send those importance points to those five pages. And in each step, I'm sending out some score, and I'm receiving some score from pages that link to me, so at each step the total stays the same. 

There's a conservation of importance score. If I start with a million pages, and they're all initially assigned a score of one, in the first step of this algorithm there's a total of a million, and in the second step and third step, and so on, there will also be a total of a million, because pages are sending part of their score out and receiving some score from other pages, and therefore you're not destroying any score. But over time, what happens is, if you keep on calculating this, taking these steps over and over again where I'm sending out score and I'm receiving score, eventually you reach some kind of equilibrium, where every node is sending out the same amount that it's receiving. And that gives a PageRank score, which is the importance of each page. 
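Here's a sketch of that iteration in Python. This is the common textbook formulation with a damping factor of 0.85, and it assumes every page has at least one outbound link; the tiny three-page web at the bottom is made up for illustration.

```python
# Power-iteration PageRank sketch: each page keeps a (1 - d) share of
# importance and sends the rest out equally along its outbound links,
# repeated until the scores settle into an equilibrium.
def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # everyone starts equal
    for _ in range(iterations):
        new_rank = {p: (1 - d) / n for p in pages}
        for p, outbound in links.items():
            share = d * rank[p] / len(outbound)  # split 85% equally
            for q in outbound:
                new_rank[q] += share
        rank = new_rank                          # total score is conserved
    return rank

# A tiny web: A and C both link to B; B links back to A.
web = {"A": ["B"], "B": ["A"], "C": ["B"]}
ranks = pagerank(web)
# B, with two inbound links, ends up with the highest score
```

Note that the scores sum to 1 at every step, which is the conservation property described above.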

I think that the 85% was somewhat arbitrary, but it was determined, based on the actual internet, to be the value that worked pragmatically the best. But basically, it's equivalent to saying, okay, I start on a random web page. For each web page, there's a 15% chance I stop and an 85% chance I click one of the links, and I keep going until I stop. And then when I stop, what web page am I at? The web pages you're more likely to be at rank higher, and the websites you're less likely to be at rank lower. Now, this seems very simple. Again, it can be gamed pretty easily by link farms. But this algorithm beat all the other algorithms in existence beforehand, which were not based on graph algorithms; they were based on simple features, usually words, trying to score the importance of a site that way. This way turned out to be an order of magnitude better than other search engines. And that's why Google ultimately won out: because they were using sound computer science. Even though it wasn't a perfect solution, it was a much, much better solution, and it has scaled beautifully until today. 
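That random-surfer story can also be simulated directly, as a sketch: stop with probability 15% at each step, otherwise follow a random outbound link, and count where the surfers end up. This again assumes every page has an outbound link, and the three-page web is made up for illustration.

```python
import random

# Monte Carlo "random surfer": where surfers tend to stop approximates
# the PageRank scores, without running the iteration explicitly.
def random_surfer(links, walks=20000, stop=0.15, seed=0):
    rng = random.Random(seed)
    pages = list(links)
    ends = {p: 0 for p in pages}
    for _ in range(walks):
        page = rng.choice(pages)          # start on a random page
        while rng.random() > stop:        # 85% chance: click a random link
            page = rng.choice(links[page])
        ends[page] += 1                   # 15% chance: stop here
    return {p: count / walks for p, count in ends.items()}

web = {"A": ["B"], "B": ["A"], "C": ["B"]}
scores = random_surfer(web)
# A and B, which sit inside the link cycle, vastly outscore C
```

The two views give approximately the same ranking; the simulation is just noisier, which is why the iterative calculation is what actually gets used.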

That might be changing soon, but that's why Google became very, very successful. They contributed something real to the ecosystem. And not just real, but real big. So, okay. Again, let's take this back to a machine learning situation. You could say, hey, what can I do if there's a network or a graph included in something that I'm trying to learn? And for those of you who don't do machine learning, let's think about how you could use this in your life. Well, like PageRank, maybe you can assume that there's a back-and-forth relationship between the nodes on the graph. So maybe you can say, hey, if I have a friend graph, I'm being affected by my friends, and I am affecting my friends. Maybe that will encourage you to encourage your friends in a positive way, and maybe even pick the right friends. 

And if you have any graph in any machine learning problem that you are coming up with, let's say at Foursquare, we often talk about users of Foursquare. We could talk about people who go to the same places. We could talk about people who are literally friends on Foursquare, things like that. We can say they're kind of similar to each other. There's a lot of connection here to learning by analogy, which I've talked about in the past, where you assume that things that are close by are analogous, and therefore they should be similar in terms of the problem that you're trying to solve. 

But one of the interesting things about these graphs is that it's not always the case that you're similar to your neighbors on the graph. Sometimes it is. I think maybe for websites it is, where important websites link to other important websites, and they don't link to less important websites. Less important websites kind of link randomly, and maybe don't tell you which site is important. Maybe there's some similarity metric there. 

Maybe there's some kind of similarity metric in your friends graph in real life, although I think it's a lot more complicated than that, because your friends aren't necessarily similar to you. And most people have different groups of friends, which are similar in one area, like maybe you went to the same school, or you're interested in the same hobby, or you live in the same building, but which can be very different in other areas. So it's not necessarily the case that your friends from grad school are more likely to have the same politics as you, or that your friends from work are likely to enjoy the same food. The friend graph is a lot more complicated. 

But an interesting counter-example, where matches don't necessarily equal similarity, is in dating. I actually read a paper about five or six years ago, and I don't know if anyone actually dates like this, but they used the example of a dating app to show how matches and friends are not necessarily similar to each other. For example, if I am trying to play matchmaker, and I find one person who would be a great match for you, and another person who would be a great match for you, that doesn't necessarily mean that those two people would date each other. Probably not: if we're talking about heterosexual dating, and you're a woman and I find two men who would be good matches for you, they would probably not be matches for each other. 

So they identified something called split-complex numbers, which is used as a model in this situation. I found this fascinating. Again, I don't know if anyone who is spending their time analyzing dating using split-complex numbers actually gets a date, but we'll see. It's an interesting idea. For those of you who are familiar with complex numbers, there are the real numbers, you know, positive numbers, negative numbers, and then you have this number i, where i times i is negative one. 

Split-complex numbers are similar. They also have an i, except in this case, i times i gets you back to positive one. And the idea is that every two people have a score relating them to each other. If you are similar to someone, and that's usually someone of the same gender, then your score will be close to one. If you're different from somebody of, let's say, the same gender, again, I'm talking about opposite-sex dating, your score is negative one. So if your score with someone is on the real axis of similarity, then you're likely to be the same gender. 

Now, if you're a good dating match for someone, your score with relation to each other would be i. And then, let's say I'm looking for matches for Person A, and I find matches B and C. B and A have an i score, and C and A have an i score. Then you multiply them together to get the score between B and C, which is one. And essentially, that means that if you have two matches, or multiple matches, they're not matches with each other, but they're similar to each other. And if I have a good match for you and a bad match for you, then those two people would probably be different from each other. 
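As a sketch of that arithmetic, here's a tiny split-complex number class in Python. Writing the "match" unit as j, and the specific example scores, are my notation for illustration, not necessarily the paper's.

```python
# Split-complex numbers: a + b*j, where j*j = +1 (unlike complex i*i = -1).
# Real part = similarity score, j part = "match" score.
class SplitComplex:
    def __init__(self, real, split):
        self.real = real    # similarity component
        self.split = split  # match component (coefficient of j)

    def __mul__(self, other):
        # (a + b*j)(c + d*j) = (ac + bd) + (ad + bc)*j, since j*j = +1
        return SplitComplex(
            self.real * other.real + self.split * other.split,
            self.real * other.split + self.split * other.real,
        )

MATCH = SplitComplex(0, 1)    # a perfect match: score j
SIMILAR = SplitComplex(1, 0)  # perfect similarity: score 1

# B matches A, and C matches A ...
score_bc = MATCH * MATCH
# ... so B and C come out as pure similarity, because j * j = 1
```

Multiplying two match scores lands you back on the similarity axis, which is exactly the "my two matches are similar to each other, not matches" property described above.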

That's a really interesting example of where you might have a graph where you're not necessarily making the assumption that nodes that are connected are similar, but they're related in some way. And so there's a whole class of algorithms and thought experiments that you can come up with that look at these ideas. I'm going to link to that paper. Let me look at who wrote it. Okay, so that's from the University of Koblenz-Landau in Germany. I'm not even going to try to pronounce these names, folks. It's from far away, but it's very interesting. 

Um, alright. That's all I have for today. I know it's a little bit of a scattered group of news stories today. But I am looking forward to this summer, and we are going full steam ahead on The Local Maximum. I'm hopefully going to talk to Aaron again next week, and we are going to roll into some more interviews. I have a really great feeling about the rest of this year and this summer; I can really ramp up what we're doing on The Local Maximum. I feel like the last year and a half, with the pandemic and everything that's been going on, has been really tough. It's been a pleasure keeping this podcast alive. Now that we've survived up to this point, I think it's time to expand once again. So if you want to help me do that, check out my Locals, www.maximum.locals.com. Have a great week, everyone. 

That's the show. To support The Local Maximum, sign up for exclusive content and our online community at www.maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at www.localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.
