
Episode 146 - Math, Language, and Intelligence with Tai-Danae Bradley

Did you know that a coffee mug and a doughnut are the same thing? That might be obvious to a skilled topologist, but it may seem strange to most people. Topology might seem complicated, but Tai-Danae Bradley writes about it in a way that makes it easy to understand. Her math blog, Math3ma, also covers topics like quantum probability.

In today’s episode, math blog creator and Ph.D. mathematics graduate Tai-Danae Bradley helps us understand topology and how it relates to language. She also shares fresh perspectives on language modeling, its uses in artificial intelligence, and the application of quantum probability to machine learning.

Tune in to the episode to learn more about the future of mathematics!

About Tai-Danae

Tai-Danae Bradley is currently a postdoc at X, the Moonshot Factory (formerly Google X). She is also the creator of the math blog Math3ma and a Ph.D. mathematics graduate from the CUNY Graduate Center.

Tai-Danae is a co-author of the book Topology: A Categorical Approach.

You may connect with Tai-Danae on Twitter, Facebook, and Instagram. To learn more from her mathematical expertise, visit her website, Math3ma.

Sponsor: Manning Publications

Enjoy free one-day live online technology conferences with Manning! Learn from the live@Manning Conference: Math for Data Science on December 1, 12–5 PM EST, live-streamed on their Twitch channel.

Here are three reasons why you should listen to the full episode:

  1. Gain a perspective on how topology works.

  2. Learn how to model language using mathematics.

  3. Discover how quantum probability is applied to machine learning.

Episode Highlights

Background of Topology: A Categorical Approach

  • The book has two authors in addition to Tai-Danae: Tai-Danae's thesis advisor in graduate school, John Terilla, and Tyler Bryson, one of John’s graduate students.

  • According to Max, the book is a lot clearer and more interesting than other graduate textbooks in mathematics.

  • Max recommends the book for math or topology enthusiasts, but not for a general audience.

What Is Topology?

  • Topologists are trying to understand when two things are the same, up to an appropriate notion of sameness.

  • A famous topological example is how coffee cups and doughnuts are the same.

  • Topology is also trying to understand when things are not the same, based on specific chosen qualities or properties.

  • Before beginning a topological discussion, you have to choose the appropriate topological properties.

Parts of Topology that New Learners Have Trouble Understanding

  • The actual definition of topology can be discouraging.

  • Topology sounds fun, but it's much more than what most people expect.

  • Topology is very abstract; it is far from the calculations in calculus or linear algebra. There is a mathematical culture shock.

Coolest Results in Topology for Tai-Danae

  • Tai-Danae's favorite result is Brouwer's Fixed Point Theorem, because of its proof.

  • The proof illustrates how category theory helps in simplifying complicated topological ideas.

  • If you transport something difficult into an easier language, it becomes easier to solve.

  • You can see the theorem at work by continuously smushing a disk or swirling coffee in your cup!

Understanding Language through Mathematics

  • Language is like algebra in that it is compositional.

  • There are also statistics in language because some word combinations occur more frequently than others.

  • Language can serve as a real-world guide for discovering mathematical structure.

Statistical Models of Language, Understanding the Meaning of Words, and A.I.

  • While language A.I. bots can do amazing things, they might not fully understand words.

  • If you want to understand a word's meaning better, you should know about its environment or the context in which it appears.

  • A mathematical theorem called the Yoneda Lemma expresses the same idea, but for mathematical objects instead of words.

Applying Quantum Physics Using Representation Theory

  • If you translate your problem into matrices, then each word is assigned a matrix, and the matrix for a phrase is the product of its words' matrices (a toy sketch follows this list).

  • You can use matrices to determine probabilities. This concept is widely used in quantum mechanics.

  • When you zoom out of the product of your matrices, you can interpret it as a tensor network.

  • We can build machine learning algorithms based on this concept. Doing this is a topic of active research.
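
To make those bullets concrete, here is a minimal toy sketch in Python (the vocabulary and the 2x2 matrix values are invented for illustration; the construction in Tai-Danae's actual work is more subtle). It shows the two structural ingredients from the episode: the matrix for a phrase is the product of its words' matrices, and a probability-like score comes from the trace of a matrix times its conjugate transpose.

# Minimal toy sketch (illustrative values, not from the episode).
import numpy as np

word_matrix = {
    "red":   np.array([[0.9, 0.1], [0.0, 0.5]]),
    "fire":  np.array([[0.7, 0.2], [0.1, 0.6]]),
    "truck": np.array([[0.8, 0.0], [0.3, 0.4]]),
}

def phrase_matrix(words):
    """The homomorphism property: multiply the word matrices in order."""
    m = np.eye(2)
    for w in words:
        m = m @ word_matrix[w]
    return m

def score(words):
    """Unnormalized probability-like score: trace(M M†), always a nonnegative real."""
    m = phrase_matrix(words)
    return float(np.trace(m @ m.conj().T).real)

print(score(["red", "fire", "truck"]))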

Tai-Danae’s Accidental Popularity

  • Tai-Danae started her blog as a method of learning.

  • Her way of overcoming the fear of topology was to fight through the jargon and make sense of why the definitions are what they are.

  • Writing about topology the way she wished it had been taught helped both her and other students in the same boat.

5 Powerful Quotes from This Episode

“There's a lot of math books out there that are not as clear or straightforward . . . it doesn't necessarily have to be that difficult if you organize it correctly. Right?”

“Sometimes thinking about topology can be kind of difficult. But if you can transport your difficult problem into easier language . . . maybe like group theory, algebra, you can solve your problem there and then transport it back.”

Tai-Danae's reading of the Yoneda Lemma adapts a quote by linguist John Firth: "You shall know a mathematical object by the company it keeps."

On having a toolset of established literature: “I think that could be really helpful, especially when developing a product. People would like to have faith in their models before they use them in some way.”

“So I would have to fight through that dense fog of jargon and formality to make sense of the math for me personally so that I can understand and do well in school.”

Enjoy the Podcast?

Are you hungry to upskill by learning more about topology, language modeling, and quantum probability? Do you want to expand your perspective further? Subscribe to this podcast to learn more about A.I., technology, and society.

Leave us a review! If you loved this episode, we want to hear from you! Help us reach more audiences to bring them fresh perspectives on society and technology.

Do you want more people to understand topology, language modeling, and quantum probability? You can do it by simply sharing the takeaways you've learned from this episode on social media! 

You can tune in to the show on Apple Podcasts, Soundcloud, and Stitcher. If you want to get in touch, visit the website, send me an email at localmaxradio@gmail.com, or follow me on Twitter.

To expanding perspectives,

Max

Transcript

Max Sklar: You’re listening to The Local Maximum Episode 146.

Time to expand your perspective. Welcome to The Local Maximum. Now here’s your host, Max Sklar.

Max Sklar: Welcome, everyone. Welcome. You've reached another Local Maximum. Good to be with you today. Today, I want to share with you a really great discussion. I am so excited to have this episode out. This was a real mind-expanding episode for me, both in the research I did for it and in the discussion itself. This is a discussion with a mathematician whose work I've been following online for a few years.

As someone who's been thinking about artificial intelligence for many years, I'm always looking for a new way to approach it. And I dove into the work of today's guest, Tai-Danae, and I think she really has some interesting insights into it. And I was like, gotta talk to her. And we finally got it done after a little while.

If you're not a mathematician, I want you to stay. And you know I would tell you if this wasn't for you. But you're gonna get insights from this discussion that you won't get anywhere else. And it's not just going to be dry and technical. You're going to hear us discuss some interesting things—whether it's struggles in terms of understanding this stuff, or philosophical questions on the meaning of understanding and intelligence. All of it's interesting, I think, for a general audience. Sometimes we go off the deep end here, but I think you'll enjoy seeing that too.

So we cover a lot of ground. We're going to start—we're going to talk about topology, but also the trick to writing about a topic like that clearly, and I think there will be lessons to take away from that. We're going to cover the language of mathematics—or no, not the language of mathematics—the mathematics of language. Tai-Danae has things to say about it, and about what that means for how we communicate and how we think. And finally, we're just gonna dive into the idea of using the mathematics of quantum physics in AI. Is this possible? It's something I haven't seriously considered before, and I suspect many of you out there want to know. You're not going to hear these ideas anywhere else, folks. All right, there are a few announcements at the end, but I just want to get started for you today.

My guest today is currently a postdoc at X, The Moonshot Factory, formerly Google X. I honestly thought when I saw X in the bio that it was a stand-in where somebody forgot to write in the organization. But no, X is a real thing; you might remember Google X. And she is also the creator of the math blog Math3ma, and a PhD mathematics graduate from the CUNY Graduate Center.

Tai-Danae Bradley, you've reached The Local Maximum. Welcome to the show. 

Tai-Danae Bradley: Hi, Max. Great to be here.

Max: So first of all, congrats on your new book. Here, I'm gonna hold it up—I got it, as you can see: Topology: A Categorical Approach. That's 144 pages of honestly really hard stuff, because this is graduate-level material, isn't it?

Tai-Danae: Yes, it is.

Max: Yes. So, my degree's in computer science, so I thought I wasn't gonna be ready for this. But actually—I read a little bit of it—I found it to be very clear. And there's a lot of math books out there that are not as clear or straightforward. I think you can present really difficult topics, and it doesn't necessarily have to be that difficult if you organize it correctly.

Tai-Danae: Right. Yes. 

Max: So how did you come to put this together? Like, what was this project like?

Tai-Danae: Um, yes, well, first, thank you so much for your purchase. That's so cool to see you over there. That's really awesome. So yes, how did we come to do this? There are three authors on that book, of which I am one. One of the authors is John Terilla, who was my thesis advisor when I was in graduate school. He had already written a collection of topology notes from when he taught the course at the CUNY Graduate Center, and decided along the way that he wanted to expand them and fill them out into a book. Now, I like to write. And the other co-author on our book is Tyler Bryson, who was also a graduate student of John's at the time. So essentially all three of us got together and voila—the book.

Max: I see. So you had the material, and you said you like to write. Now I'm kind of starting to understand why it's actually a lot clearer than a lot of other graduate textbooks in mathematics might be, and maybe much more interesting. I think it's because you really care about putting it together.

Tai-Danae: Yes, it was a lot of fun. 

Max: I know, not that other people don't care about putting theirs together, but I don't know. Some of that stuff is rough, and you'd probably agree. But…

Tai-Danae: Yes, well. What’s fun is that the funny thing about having three co-authors is that we all sort of have a little different writing style. 

Max: Yes. 

Tai-Danae: And it's fun when my friends check out the book and they're like, “Oh, Tai. I know you definitely wrote that page. Like that. Totally sounds like you.” So we tried to have a uniform voice. But maybe it didn't quite get there all the way.

Max: That's actually pretty cool that people say that for a math book. 

Tai-Danae: Yes. 

Max: So all right. This book—for math enthusiasts, or topology enthusiasts—is great. It's probably not for a general audience. But I wanted to ask you: when somebody outside of the mathematical community asks you what you're working on in terms of topology, and you attempt to answer them, what do you say? What are topologists actually trying to understand?

Tai-Danae: Yes, okay. So, I think there were maybe two questions there. Let me say the first one. If someone were to ask me what am I working on in terms of topology, I might say, personally, nothing right now. Because the research that I'm doing does not yet involve ideas of topology. Although one day it might and that would be very, very nice. 

But generally, what are topologists looking for? To answer your second question: there are a few things. Maybe the first thing that comes to mind, and this is a question not specific to topology but one asked all across different branches of mathematics, is trying to understand when two things are the same, up to some appropriate notion of sameness. And so in topology, I know you're familiar with this, people say two things are the same if you can continuously deform one into the other. That's why topology is sometimes called rubber sheet geometry, and why people always use that famous example of coffee cups and donuts being sort of the same thing, because you can smoosh one shape into the other shape.

So this notion of sameness, understanding the right way to capture that idea and then answer that question, that's a big thing. I think also interesting is trying to understand when things are not the same. Like, I have two shapes, and I want to understand: "Okay, are these genuinely different with respect to the things that I care about?"

Max: Right. And so this is very abstract. So I want to stop here, because you said "with respect to the things that I care about." That depends on the field. And so topologists care about one thing, but in broader category theory, that could be any number of different things.

Tai-Danae: Yes, yes. Yes, exactly. So when I say things you care about, those things are maybe topological spaces. And if that's too abstract, just think of things. There are things you're interested in, and maybe they have some properties, and that's why you're interested in them. Like, I really like dessert. Why? Because it has the property of sweetness, and that's something I think is just delicious. And maybe I like that property over whatever properties mushrooms have. I don't really like mushrooms.

Max: Yes.

Tai-Danae: I want to know, like, “Hey, what other things are sweet and delicious?” Oh, like this class of things called desserts. See, is that…? 

Max: Right. Yes, yes, of course. But you know a donut is a dessert. And you know a cookie is. But topologists wouldn't go for that.

Tai-Danae: Right. Right. Right. Okay. So then, since we're having a conversation about topology, we have to ask: what are the appropriate properties for this discussion? We're not talking about food. I mean, I was.

Max: Gotcha. Yes, I know. I do that a lot too.

Tai-Danae: But yes. So now we're having a conversation about topology, and so we have to know what properties we're interested in. And these properties have fancy technical words like, I don't know—connectedness, the Hausdorff property, compactness, and things like that. So there are things people are interested in that are appropriate to that discussion, namely topology. So once you have these properties, then you can ask, "Okay, given that that's what we're interested in now, let's have a discussion about things, for instance."

Max: Right. What parts of topology do people have trouble understanding when they first dive into it?

Tai-Danae: Oh, I would get—

Max: I tell you what mine was. 

Tai-Danae: Yes, please. 

Max: After you're done. No, you go first. 

Tai-Danae: I'll go first, okay. Well, we'll see if they're the same. I would say, probably, the actual definition of a topology. Because if you hear two people on a podcast talking about cookies, and donuts, and abstract things, you're like, "Oh, maybe this sounds kind of fun." Maybe some of your listeners are doing work in machine learning or artificial intelligence, and they've heard of topological data analysis, right? You hear these kinds of words and you're like, "Oh, I want to learn this." So then you go to Wikipedia, or you go to your university's math library and check out the famous book on topology, and you turn to page one, and you see the definition: "A topology is…"

Max: So I'm laughing because this is so—this is pretty much what I would have said.

Tai-Danae: Okay. Yes, you look at the definition. And it's like this dry list of axioms that have nothing to do with the fun stuff you heard on the podcast or…

Max: Yes, it has nothing to do with shapes.

Tai-Danae: Yes, it has nothing to do with shapes. And then at that point, it can be very discouraging, because you're like, “Oh, this is not what I thought it was getting into. How long does it take to get to the fun stuff?” So I think that could be one barrier. Is that sort of what you were thinking?

Max: Yes, yes, I know. And it's also—it took me a long time to figure out, like, why is that the definition? And I'm not even going to go into the definition here; it's not that long, the definition of a topological space. But a lot of math books just throw it out there, like, "This is a topological space." Like, why? Why would you do that? How come you can take infinite unions but only finite intersections? I don't get it.
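
For reference, the definition they are circling is short but famously unmotivated: a topology on a set $X$ is a collection $\tau$ of subsets of $X$ (the "open sets") such that (1) $\varnothing \in \tau$ and $X \in \tau$, (2) arbitrary unions of members of $\tau$ are in $\tau$, and (3) finite intersections of members of $\tau$ are in $\tau$. Infinite intersections are excluded for a reason: on the real line, the intersection of the open intervals $(-1/n, 1/n)$ over all $n$ is the single point $\{0\}$, which is not open.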

Tai-Danae: Yes, yes. And maybe another thing, which we already touched on, is that it's just very abstract. So if the math that you're used to is more computational, maybe you did really well in calculus, maybe you did a little bit of linear algebra, and you're like, "Yeah, this is great. Let's do more." And then you get smacked in the face with topology, which is just very abstract. The "computations" are very different from what you do in calculus. It just feels like a—I don't know—a little bit of mathematical culture shock, depending on where you come from.

Max: Yes, yes. I want to move on in a second. But before we go on, what, in your opinion—and this question popped into my head—what in your opinion is the coolest result in topology? Or the coolest result that you can share easily?

Tai-Danae: Oh, yes. I think my favorite result is probably something called Brouwer’s Fixed Point Theorem. And I like that result because of the proof. Now, maybe I'll just say for the listeners, there are lots of places on YouTube that have really great, exciting, easy to understand videos about this. But let me just tell you why I like it. 

The proof of this—well, there are several. But the one we show in our book, the one you're holding, illustrates, I think, one of the most amazing ways possible why category theory, which is sort of in the title of our book, is something that's married so naturally with topological ideas. So category theory, like, wow. If you thought topology was abstract, category theory is an even more abstract branch of mathematics. But what it essentially does is provide a bridge between topology and another area of math called algebra. And it turns out that sometimes thinking about topology can be kind of difficult. But if you can transport your difficult problem into easier language, or language where you're like, "Oh, yeah, I feel really good and comfortable about this," maybe like group theory, algebra, you can solve your problem there and then transport it back.

And so the proof of this theorem that I like puts this transporting idea on display using simple language and category theory. And I think that's a great commercial, mathematically speaking, for the power of these categorical ideas and topological ideas coming together.

Max: Would it be possible to tell us what Brouwer’s Fixed Point Theorem says?

Tai-Danae: Yes, yes. So essentially, it says, if you have a—the word is a disc—so just think of a circle that's filled in. 

Max: Okay.

Tai-Danae: And you sort of do any kind of continuous smushing of it, there's always going to be one point that ends up back where it started. So maybe I'm describing this in two dimensions. But I think the usual three-dimensional explanation is the nicest. So if you have a coffee—a cup of coffee and it's still, right?

Max: Yeah. 

Tai-Danae: And then you take your spoon and you swirl it around. And then you let everything come to rest again, the theorem says there's always going to be one little molecule that ends up exactly where it was before you stirred.
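
For listeners who want the formal statement: Brouwer's Fixed Point Theorem says that every continuous map $f \colon D^n \to D^n$ from the closed $n$-dimensional disk to itself has at least one point $x$ with $f(x) = x$. The smushed disk is the $n = 2$ case, and the stirred coffee is the $n = 3$ case.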

Max: Right. And that's not very clear. But it's true.

Tai-Danae: Right.

Max: But it has to be.

Tai-Danae: Yes. 

Max: Okay, I want to turn to your work on language modeling, because that's something I have some actual experience with. I've talked about it before here on the show: some of the Foursquare data sets, which are real data sets. What's always interesting about these real-world setups is that each part is not that sophisticated, but it gets put together in a very complex way.

But some of the parts of your approach here, and I'm talking about your paper, Language Modeling with Reduced Densities. But I'm sure you've done other stuff, so you can talk about whatever you want. Your approach here is new, or new to me, and I want to explore it further. But before we begin, I want to know more about your approach to understanding language through mathematics, because you have a video entitled Languages: Algebraic and Statistical, or Compositional and Statistical. What do you mean by that?

Tai-Danae: Yes. Thanks for the question. Yes. So the video, I think, the title is At The Interface of Algebra and Statistics. But I do use language as the primary example. So maybe just—I can give some quick background for listeners. 

So I'm a mathematician, and I'm very much interested in algebraic and statistical structure. So algebra is a thing in math. Statistics and probability are a thing in math. But what if you have an algebraic structure that's mediated by statistics? What's at the interface of these two things? As far as I know, that's not mathematics that's really been explored, even though it's all around us—in particular, in language.

So language is algebraic, or compositional, because you can take two words and stick them together to get a new expression—like "red" and "fire truck": stick them together, "red fire truck." So that's a little bit like multiplication or concatenation. This is an algebraic structure. But there's also statistics involved, because some of these multiplications or concatenations occur more frequently in language than others. I think I don't have to give…

Max: Right.

Tai-Danae: Well, I think people know. But the one I use in the video is: "orange fruit" is a thing that we can say, and "orange idea" is also a thing that we can say. But one of those phrases occurs less frequently in English, and that actually contributes to their meanings.
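
As a toy illustration of the statistical side (the corpus below is invented, not from the episode), a minimal Python sketch that counts adjacent word pairs already shows the "orange fruit" versus "orange idea" asymmetry:

# Toy sketch: empirical bigram statistics in a tiny made-up corpus.
from collections import Counter

corpus = ("the orange fruit fell from the tree "
          "she ate the orange fruit "
          "he had a bright idea").split()

bigrams = Counter(zip(corpus, corpus[1:]))  # counts of adjacent word pairs
total = sum(bigrams.values())

def bigram_prob(w1, w2):
    """Empirical probability of seeing w1 immediately followed by w2."""
    return bigrams[(w1, w2)] / total

print(bigram_prob("orange", "fruit"))  # positive: the corpus contains it
print(bigram_prob("orange", "idea"))   # zero: grammatical, but never observed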

Max: Right. I mean, I guess you can have an orange idea. But I don’t…

Tai-Danae: Sure, why not?

Max: Right, right. No. But I'm thinking, there are certain things where they both exist, where it would still be less like—okay, red fire truck versus purple fire truck. There's no reason why you can't have a purple fire truck. It's just that's a real world experience.

Tai-Danae: Right, right. And I guess the probability of someone saying, “I saw a red—a purple fire truck speeding down the street,” is slightly less if you…

Max: Sure.

Tai-Danae: Purple with red.

Max: So you've got to understand—you've got to have like, generally, I don't just...

Tai-Danae: Yes, this is difficult. This is very difficult. But anyways—as a mathematician, I'm just interested in the structure. What is it? And I like to think of language as kind of a guiding example. I personally really enjoy mathematics that's motivated by real-world physical observations. I think that's a really exciting place to be. So in that sense, I think language is a good example of a structure that deserves more investigation.

Max: So I've spoken about the baseline for language models here. As I've said, specifically assigning probabilities to different words and phrases—you just talked about that. One of the things I keep trying to discuss, which is not always that clear, is: what is the difference between statistical models of language and actually understanding the meaning of the words? Or is there a difference? I find it's not always straightforward, and I just wanted to know how you think about this question.

Tai-Danae: That's a very, very good question. I will say, my opinions or thoughts are probably not great because I know that a lot of people spend a lot of time…

Max: You're selling—you're selling yourself short, though. You have a perspective here that I don't have, and that a lot of people who listen don't have either. Please share.

Tai-Danae: I appreciate that. Okay. But before I say anything, I don't want to get in trouble with the people who have actually been thinking about artificial general intelligence and natural language understanding for decades. But…

Max: Well, I do. Well, okay. Not decades, maybe like a decade. But you're not going to get in trouble with me. 

Tai-Danae: Okay, good.

Max: All right.

Tai-Danae: All right. So let me say, I guess two things come to mind. One, when I think about the state-of-the-art language models that are out there today, like transformer networks, they are amazing. I think they are probably not really understanding. First of all, I don't know what the right definition of understanding is, or of meaning. But just intuitively, I think there are enough examples where you play around with GPT-3 and it gives you some funny answer, and you're like, "Huh, that's weird. That's kind of cute." But it's clearly wrong. So I don't think they're quite understanding. But they are amazing, and they can do really, really impressive things. That's my first thought.

My second thought, “If I could take this back to mathematics?” This question brings to mind a quote that you might be familiar with, and your listeners might be as well. This is a quote from a linguist named John Firth back in 1957, who said, “You shall know a word by the company it keeps.” And I like that a lot because it's kind of suggesting, “Hey, if you want to understand something about the meaning of a word, and maybe later on, you can make that understanding more principled and try to incorporate it in your language model, then you should know something about the environment or the context of the word.” So, “You shall know a word by the company it keeps.” 

I like to think of this as the linguistics version of a really famous mathematical theorem called Yoneda's lemma, or the Yoneda lemma, which essentially says the same thing, but for mathematical objects like topological spaces, as we were talking about earlier, and so forth. So it basically says, "You shall know a mathematical object by the company it keeps," if I'm allowed to say it that way. It's a little more technical than that. But it sort of says: if you want to understand a mathematical thing, like a topological space, or a vector space, or whatever, then it's enough to look at how that object interacts with all other objects in its ambient environment. So if you want to study it, you don't have to take a magnifying glass and look at its internal anatomy. You can zoom out and just see its context, and that gives you all the information.
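
For the mathematically inclined, the formal statement: for a category $\mathcal{C}$, an object $A$, and a functor $F \colon \mathcal{C} \to \mathbf{Set}$, the Yoneda lemma gives a bijection $\mathrm{Nat}(\mathrm{Hom}_{\mathcal{C}}(A, -), F) \cong F(A)$. The "company it keeps" reading is a standard corollary: if $\mathrm{Hom}(A, -) \cong \mathrm{Hom}(B, -)$ naturally, then $A \cong B$, so an object is determined, up to isomorphism, by its relationships to everything else.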

And so it seems like, from both the linguistic perspective and the mathematical perspective, you can learn quite a lot about the meanings of things from this sort of statistical, environmental, contextual perspective, which I think people are already taking. I don't know what the answer to the question is, but there's a lot of evidence that this is the right direction.

Max: Yes, it seems like there's a school of thought where machines will understand language if we just have more and more statistical models. And then there's another one saying, "No, we actually do need a semantic graph being generated." And I don't really know. I don't think it's clear. I don't think it's proven either way—or even whether the question is well formed, or something like that.

Okay. Oh, God, I have another mind-bending question next, which I don't even know the answer to. So maybe you can help explain this to me. So you're using some ideas from quantum physics to model probability distributions. And this is very new to me. I mean, I've seen some of the math for quantum physics before, but I've never used it in a probability model.

So I'm trying to wrap my head around this, because I understand standard probability distributions, I think. I mean, that's hard for a lot of people to wrap their heads around to begin with. But I'm not really sure what's going on here. My sense is we're adding more information, but I don't know what that is. So maybe you can explain that. I don't know. So far, you've been very good, but I'm like, this is gonna be tough, because I'm having trouble.

Tai-Danae: Okay. Yes, this is a great question. So I'm going to try my best to answer this in an understandable and exciting way. So let's see. Let's see how this goes. First, let me just say, for the benefit of listeners—we were talking about algebra and statistics in language, and all of a sudden, there's this curveball. And now we're talking about quantum physics. What in the world? What is happening?

Max: This is why I knew this conversation was gonna be so awesome. Go ahead.

Tai-Danae: Okay. So rather than just diving into quantum physics, let me actually gently lead us there. And let me do that by first just saying: I'm interested in algebraic and statistical structure. And I think we've given enough little examples to see why that should be interesting.

Again, think of language, where you can take words and kind of multiply them, or concatenate them, together. That's a little bit like a group, if you like symmetries or group theory. You have elements, and you have a group operation: you stick them together. Maybe I have all that without inverses. I don't know what the inverse of "red fire truck" is. So let's throw out inverses.

Max: Yes.

Tai-Danae: Now, I'm going to make another analogy. When we were talking about Brouwer's fixed point theorem, I sort of said, "Hey, sometimes problems can be difficult, and so you want to transport them over into another field of math where things become easier." So a few minutes ago, we said, "Hey, let's take a topological space, but actually associate to it a group." And that's kind of what the meat of the proof of Brouwer's fixed point theorem that I have in mind does.

I want to do something analogous here. Rather than viewing language as an algebra, or like a group without inverses, or a place where I can multiply things—that's great. But it turns out that doesn't really have all of the structure you want. Let me not explain that. Let me just say: it turns out you can assign to these little elements that you multiply together something like matrices.

In other words, if you transport your problem into the world of linear algebra, now we're talking, right? Everyone likes linear algebra. I think. Many people like linear algebra, and especially if you want to do something concrete in the space of machine learning, linear algebra is where it's at. 

Max: Or it's common. Yes, there's a lot of good techniques there.

Tai-Danae: There's a lot of good techniques. And even from a mathematical perspective, I have in mind something called representation theory, where you represent some algebraic structure using matrices, the tools of linear algebra. So I'm going to transport my problem now into this space of linear algebra. So what that really means is: I have a word, and I want to assign every word a matrix. With the property that, if I have like...

Max: So that's different from Google's Word2vec, which usually associates each word with a vector.

Tai-Danae: Vector, right? 

Max: Instead of a matrix.

Tai-Danae: Yes. So let me just say, the things I'm talking about here are radically different from the current technology that's out there, right? So we're really thinking out of the box here. So what you can do is assign to each word a matrix, such that the matrix for a phrase like "red fire truck" is just the matrix for "red" multiplied by the matrix for "fire truck." In mathematical terms, this is called a homomorphism. So there's my algebraic structure, baked in. But I also want the probability and statistics to arise.

And so what you can do is say the probability of a word, like "fire truck," is something like: take its matrix, and then just multiply it by its adjoint. Or, if people are more familiar with that term, its conjugate transpose. And maybe take the trace of that. Now, if those terminologies are confusing, let me just say it this way: anytime you have a thing, and you multiply it by another thing, and get a real number out of it. So I took a matrix, I multiplied it by its adjoint, maybe I take the trace. It's a little bit more than that. Okay.

Max: But it's not…I mean, I think we lost a bunch of people here. But it's okay. Let's keep going. Let’s find a conclusion here.

Tai-Danae: Anytime you take a thing, and you multiply it by another thing, and you get a real number out of it—that should make you think of complex numbers. I have a complex number, I multiply it by its complex conjugate, and I get a real number out of it. 

Max: Sure. 

Tai-Danae: Okay. So this idea of sort of squaring to recover probabilities plays a huge role in quantum mechanics. So there's your first kind of connection. The next connection is, and okay, now it's going to get more technical by the minute. I want to avoid that by just telling…

Max: Yes, go for that. Don't try too hard. I want you to be able to say what you need to say.

Tai-Danae: Okay. Okay. Great. So that's sort of the first intuitive connection to quantum mechanics. “I get a probability by squaring a thing that reminds me of complex numbers. Okay, there's one thing.” 
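
Schematically (this notation is a paraphrase, not taken from the paper): just as a complex number $z$ yields a nonnegative real via $|z|^2 = \bar{z}\,z$, a word $w$ with matrix $M_w$ yields an unnormalized probability via $p(w) \propto \mathrm{tr}(M_w M_w^\dagger)$, where $M_w^\dagger$ is the conjugate transpose. Since $\mathrm{tr}(M M^\dagger)$ is the sum of the squared magnitudes of the entries of $M$, it is always a nonnegative real number, which is the "squaring to recover probabilities" she mentions.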

The next thing is when you zoom out, and you take a big breath, and you're like, "Huh, what did I just do? Oh, yeah, I have these matrices assigned to words. I get these probabilities by kind of taking the trace of the square, blah, blah, blah. What is that?" And you just ask, "Does this have a name?" It turns out the information that you're dealing with can actually be described by something called a tensor network.

Now, I don't expect that many people listening have heard of tensor networks. So let me just say, this is a tool that historically is used in quantum many-body physics to understand the states of quantum systems. So if you want to understand the state of some quantum system, and you want to simulate it on a computer and do stuff with it, a tensor network is probably going to be one of the things you're going to use.

Maybe for people who are—we were talking about linear algebra—if all of that sounded fancy, let me just say this for the benefit of the listeners: a matrix is an example of something called a tensor, and a vector is an example of something called a tensor. A vector is like a one-dimensional array of numbers; a matrix is a two-dimensional array. But why stop at two, right?

Max: Yes.

Tai-Danae: You could have a cube's worth of numbers. This is called a 3-tensor, and so forth. So a tensor network is essentially just like matrix multiplication, but in this larger-dimensional array space.
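
A quick sketch of that "why stop at two" idea, using NumPy's einsum (the shapes are arbitrary toy values): contracting a shared index is ordinary matrix multiplication, and the same move extends to higher-dimensional arrays, which is the basic operation in a tensor network.

# Toy sketch: tensor contraction generalizes matrix multiplication.
import numpy as np

A = np.random.rand(2, 3)          # a matrix is a 2-tensor
B = np.random.rand(3, 4)
C = np.einsum("ij,jk->ik", A, B)  # contract the shared index j: matrix product

T = np.random.rand(2, 3, 4)       # a "cube" of numbers: a 3-tensor
v = np.random.rand(4)
M = np.einsum("ijk,k->ij", T, v)  # contract one index of the 3-tensor

print(C.shape, M.shape)           # (2, 4) (2, 3)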

Max: Yes. I mean, that's very common when you're dealing with data. 

Tai-Danae: Yes.

Max: There's always like…

Tai-Danae: Yes.

Max: There's always like all these different layers. You could have three fields to specify, and then all of a sudden, you have a three-dimensional. 

Tai-Danae: Exactly, exactly. Okay, so this is a familiar thing. It just so happens that not only are these kinds of higher-dimensional linear-algebraic ideas useful in machine learning, but they're also useful in physics, which is also not surprising, because you have a whole bunch of small things interacting with each other. Kind of like you have a whole bunch of data, and it comes together and gives you something interesting.

So when we zoom out and sort of say, "Okay, this is kind of what we want to see," you'll notice that there is an established body of literature in physics that explores these kinds of algebraic-statistical structures. Not in the context of language, but in the context of physics. So hey, maybe we'll see what happens if we borrow some of those ideas and techniques. And I think it would be helpful to say that this idea I'm describing was first laid out in a paper in 2017 by John Terilla, Vasily Pestun, and Yiannis Vlassopoulos. And in 2016, there was a really nice paper by Miles Stoudenmire and David Schwab. I think they were some of the first to spearhead this idea of taking tools from quantum physics and applying them to supervised and unsupervised learning problems. And so now this is an active area of research.

Max: So do you think we could build machine learning algorithms like this in the real world? Do you think it's practical at this point?

Tai-Danae: Yes. Yes, I do. As I was saying, this is active research right now. There are people working on these things. I think it's practical. I'm not discouraged by the fact that really impressive state-of-the-art language models exist already. I think that's actually very encouraging, because what it suggests is: yes, there absolutely is algebraic structure and statistical structure out there. These language models are sort of maybe accidentally stumbling onto it. Wouldn't it be great to actually identify that, and then build a nice, principled model that uses these ideas?

Max: Yes. So what would your dream application be, if you had all the time and resources? What would you use this for first? Would it be the language model?

Tai-Danae: Yes, that's a good question. Well, coming from the mathematics side, my dream would be to pin down the math, and then pass that on to the people with applications in mind. But there are lots of really great applications out now. So maybe doing something like that.

I really like—one thing about having a toolset of established literature is that there's really great potential for interpretability of these models. And so being able to have models where you can say, "Oh, we know exactly why it worked, and here's why it worked, or here's why it didn't work." I think that could be really helpful, especially when developing a product. People would like to have faith in their models before they use them in some way. So I think that could be a good...

Max: Yes, you're right about that. All right. So you actually have a pretty big presence on the web and on YouTube. I was gonna say, I don't know how common that is for mathematicians. Did you always want to try to popularize what you're doing, or did you just happen to fall into it?

Tai-Danae: Yes, I didn't. I guess it was more of falling into it. I certainly did not start a blog to become popular. That was not a goal. I started it because I wanted to understand what I was doing, or what I was learning at the time. So I started the blog, I think, my first year in graduate school, when you're studying for these qualifying exams. They're really like final exams, so maybe not such a big deal. But also it's a little bit of a big deal, because if you don't pass them enough times, you get kicked out of the program. So, like, no pressure, but also it's good if you pass them.

So I was studying for these exams. Now, people learn well in different ways. Some people are visual learners, others are auditory; other people learn by doing. I learned early on that I learn best by writing. So when I was studying, I would have to—you know, Max—I would look at the definition of a topology, and I'd say, "What in the world? I do not understand. I do not understand this. What? Is this English? I don't know."

Max: So honestly, I saw it, probably as an undergrad. It took me like years to come back to it. I was scared of it.

Tai-Danae: Yes, yes. I understand this feeling of being scared by the math, I totally get that. So what I would have to do to overcome that was to just sort of figure things out: "Okay, what did they really mean? How does this apply? How does this give rise to all the cool stuff I've heard about? Why is this the right definition? Why is it that if I have this definition, then I can say this other thing called a theorem?" It's like, why? So I would have to fight through that dense fog of jargon and formality to make sense of the math for me personally, so that I could understand and do well in school.

So just by nature of that, I started to accumulate these little mini expositions. I'd fight through something, and then understand it: "Oh, that's why the definition is blah, blah, blah. Oh, that's why the proof used this trick. Now I get it." And then I'd think, "Why didn't they just say that the first time? It's so easy now. It's obvious now." I wished I had heard it that way the first time. So I would write it out the way I wanted to be taught.

So I would just collect all these little expositions. And eventually I decided, "Oh, hey, maybe other students are also in the same boat as me. I've done the work already anyway. Maybe I'll just put it online and see what happens." And I was a little afraid. At first, I thought, "Oh, my mom's the only one who's going to read this blog. She's not even a mathematician. So I probably shouldn't do this." So I was a little bit nervous, to be honest. Every time I post a blog post, I get really scared. But anyway, I did it. And then I started to realize, "Oh, people are actually finding this helpful." I was afraid maybe it would be too easy to understand, or too simple. And when I saw it was helpful, I decided I could keep going, keep doing this. And so that's how it happened.

Max: Yes, that's very cool. It's great to hear about that. I think I first saw you on Infinite Series, the PBS show.

Tai-Danae: Yes, great. 

Max: Oh, yes. That was like—it was like, whoa. I don't remember what you were talking about in that one, but I remember it was one of the coolest videos in a while.

Tai-Danae: Thank you, Max.

Max: So I think I'll try to find that. And then I was excited to see—oh, and then you're doing topology, and then you're doing machine learning. I was like, "I've got to talk to Tai-Danae." Okay, so this has been awesome. We're about ready to wrap up. Let us know if you have any last thoughts on today's discussion. And also, where can people who are interested find you online? All of this will be posted on the show notes page.

Tai-Danae: Great. Well, thank you, Max, for reaching out to me, I appreciate it, and for all of your great questions. So people can find me online in a few places. I write about mathematics at a website called Math3ma. But it's spelled a little differently: m-a-t-h-3-m-a dot com, math3ma.com. You can find me there. I'm also on Twitter, Instagram, and Facebook: @math3ma, m-a-t-h-3-m-a.

Max: Okay, great. I sort of pronounce it like math-three-ma dot com, just to make sure people can type it. But math3ma, I get it. It's like…

Tai-Danae: Exactly. Exactly. The word "mathema" is a Greek word, which essentially means a lesson. It turns out the domain name mathema, with an E, was taken by some Italian company. And so I had to change the E to a three.

Max: All right, all right. Well, Tai-Danae, thank you so much for coming on the show today.

Tai-Danae: Thank you, Max. 

Max: All right. I hope you enjoyed listening to that conversation as much as I did having it. Everybody, particularly here in the United States, enjoy your Thanksgiving. Really one of the best holidays and best times of year. And it's been a tough year, but I think we're gonna turn the corner in a lot of ways. I've been thinking a lot about what The Local Maximum looks like for the year 2021, and I'll be considering that for the next month.

So for that reason, I'm hoping to have Aaron back on the show next week. And we're going to have a discussion on how we can interact with you, the audience more and get all the great episodes you want back out to you. Have a great week everyone. 

That's the show. Remember to check out the website at localmaxradio.com. If you want to contact me, the host, or ask a question that I can answer on the show, send an email to localmaxradio@gmail.com. The show is available on iTunes, SoundCloud, Stitcher, and more. If you want to keep up, remember to subscribe to The Local Maximum on one of these platforms, and follow my Twitter account @maxsklar. Have a great week.
