
Episode 192 - What is Intelligence

In the context of artificial intelligence and machine learning, today's episode focuses on the nature of intelligence and how it relates to concepts like understanding, consciousness, and programmability. The show also opens with a discussion of explainability in AI and research on toxicity in language models.

Links

BBN Times: Is Explainable AI a Distant Dream?
DeepMind: Challenges in Detoxifying Language Models
Wikipedia: Intelligence


Related Episodes

Episode 149 on Woke AI at Google
Episode 148 with Lisa Palmer on Augmented Intelligence
Episode 21 on Probability, Belief, and Truth
Episode 18 on the MIT AI going Psychopathic

Transcript

Max Sklar: You're listening to The Local Maximum, Episode 192. 

Time to expand your perspective. Welcome to The Local Maximum. Now here's your host, Max Sklar. 

Max Sklar: Welcome, everyone! Welcome. You have reached another Local Maximum. Today, we're going to return to another artificial intelligence update. I know that some of you are here for the current event stuff — maybe some of the controversial stuff. Although, we do have some controversial stuff today. But AI — it's not just interesting, it's going to come up again and again, and it has come up again and again. That's kind of my topic. 

There have been a few articles that I found interesting that have come across my desk lately, and I want to go into them a little bit. But first: I'm really excited about next week. I've got some big news for you. If you want the news early, you can always join our Locals at maximum.locals.com and ask me anything. Both Aaron and I are on there. This episode is not on video, but we have a pretty cool studio set up for video. Get ready for that. Lots of discussion next week. 

But today, we're going to go into AI, and I want to start with a recent article in the BBN Times called “Is Explainable AI A Distant Dream?” The author is talking about the need for AI to explain itself when it's making decisions. 

Now, sometimes we make decisions and we don't quite know why we're making them. It's the same with machines. Maybe it's a statistical model, so even if you dig down into the statistics, the story of what exactly caused the machine to do something is kind of murky. If you want an explainable version of AI — well, you have to invest in it. You have to actually design the system to be explainable, which is a little bit more difficult. But as this article suggests, it's also very useful. It also has ramifications in terms of, “Okay, is this a statistical trick, or is this a model that actually understands what's going on?” That's something I like to talk about a lot. 

First of all, one of the reasons the author says we need explainable AI is that we want to know why a mistake was made, because we humans can often find errors in our reasoning. For example, if you are hearing an argument from someone, it could be anything from a mathematical proof to just an argument along the lines of, “This is why we should do this.” 

Well, you also often have a chain of reasoning. What we do as humans is look for an error in that chain of reasoning as to why we're doing this. If you're using a statistical model, you could say, “Yes, if we do this, it's statistically most likely.” But we don't really have the chain of reasoning to fall back on. There's kind of a difference between, again, regurgitation and statistical tricks on the one hand, and understanding on the other. 

I'm just going to read a clip from this text to get an idea: 

“A hospital has a neural network or a black box AI model in place to diagnose a brain ailment in a patient. The intelligent system is trained to find data patterns from past records and the patient's existing medical papers. Using predictive analysis, if the model forecasts that the subject is vulnerable to brain-related diseases in the future, then the reasons behind the prediction may usually not be 100 percent clear.” 

The article is mostly talking about problems in machine learning where you're trying to predict something. I often see AI as kind of a broadening of machine learning and we're going to get into that in a minute because AI is generally, “How do I generate intelligence?” And machine learning is, “How do I generate learning?” Now, learning is a big part of intelligence, but perhaps you can have an intelligent system that is not doing learning. 

We'll talk about that towards the end of today, when we get to the question of what intelligence is — which I really want to get to. It's a really interesting question. We have competing definitions; we don't have an agreed-upon definition, which is kind of mind-blowing if you think about it. We talk about intelligence all the time. 

The article about explainable AI gives four reasons why you might want AI to be explainable. The first is a kind of accountability. I guess that's not necessarily — it could be holding someone accountable, but I think of it more in terms of being able to tell what's going on. The second is that it gives us greater control, because then we can ask what we're really trying to solve and make sure we're solving the right problem. If we can go in and see what it's doing, we can see whether we're getting off track from that. 

Number three is improvements: if you can look inside of it and see where it went wrong, then you can build something in to fix it. Or, if the algorithm gives you a result you don't like, but you think it's an accurate result, then you can go in and see how you can change your input to get a result that you do like.

An example I'm thinking of: let's say you have a statistical algorithm that generates your life expectancy, and you put in some things about your personal lifestyle. What kind of exercise do you do? What kind of diet? Do you drink? Do you smoke? All that stuff.

If it could give you some kind of quantification of not just “here's how long you're likely to live,” but “what's the one thing I can do that would increase that the most” — then that's really helpful, I think, in a lot of situations. Not just health issues, but the health one is where you'd start. Something like the little sketch below. 
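To make that concrete, here is a minimal sketch of the kind of one-at-a-time sensitivity check I'm describing. This is not from the article; `predict_life_expectancy`, the feature names, and the candidate changes are all hypothetical stand-ins for whatever black-box model you happen to have.

```python
# A minimal sketch, not a real model: `predict_life_expectancy` is a hypothetical
# black-box predictor, and the features/tweaks below are made up for illustration.
def most_helpful_change(person, candidate_changes, predict_life_expectancy):
    """Return the single input change that increases predicted life expectancy the most."""
    baseline = predict_life_expectancy(person)
    best_feature, best_gain = None, 0.0
    for feature, new_value in candidate_changes.items():
        modified = {**person, feature: new_value}   # change one input at a time
        gain = predict_life_expectancy(modified) - baseline
        if gain > best_gain:
            best_feature, best_gain = feature, gain
    return best_feature, best_gain

# Example usage (all values hypothetical):
# person = {"exercise_hours_per_week": 1, "smoker": True, "drinks_per_week": 10}
# changes = {"exercise_hours_per_week": 4, "smoker": False, "drinks_per_week": 2}
# most_helpful_change(person, changes, predict_life_expectancy)
```

Proper explainability methods go well beyond this, but even a crude probe like this is only possible if you can query the model with counterfactual inputs and see how the prediction moves.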

Fourthly, he talks about the idea that you could use this to make new discoveries. If you can actually audit what the algorithm is doing, you might see that it has come up with solutions to other interesting, important problems in the course of doing its work. In the brain health problem we talked about, maybe there are — I don't know what it's called — other conditions that correlate with this condition, and maybe internally it has also predicted those. Maybe that turns out to be very useful. The ability to not just solve the problem it's solving, but to see that it has maybe solved a bunch of subproblems too — we can go in there and see what's going on. 

The author also notes that many organizations don't want to be transparent externally. First of all, if they were to make their AI or machine learning algorithms explainable, there would be a lot of pressure to open that up to regulators, or to governments, or to have people asking, “Well, what's going on in there?” They also don't want to be transparent externally because they could get their intellectual property copied. If they're using this to build a product, maybe someone could use that information to build the same product. 

For example, if you're trying to do automated trading, then obviously if that gets out there and someone copies it, your entire competitive advantage goes away. They can also be opened up to lawsuits, which kind of makes you ask, “What kind of shady thing are they doing to begin with?”

On the other hand, a lot of these lawsuits can be quite frivolous. Think about it: if it's an IP lawsuit — if it turns out that your AI is doing something internally that someone claims they have a patent on — some patent troll could go after you. Or they could be hit with a discrimination lawsuit even if their AI is not being discriminatory. Now, all of a sudden, if the internal workings of your algorithm are out there, there are a lot of attack vectors for people to at least claim that it's discriminatory and slap you with a lawsuit. 

All of those things are certainly problems that people think about. Unfortunately or fortunately, the end of the article calls for regulation of AI, and I feel like calls for regulation of — quote, “AI” — are sort of meaningless, because if you think about machine learning, it's just computation. It's just statistics. The way I see it, “regulation of computation” or “regulation of a mathematical formula” is meaningless. First of all, just in terms of, “I should be able to compute anything I damn please — just like freedom of speech.” In fact, I feel like freedom of computation is even more absolute than freedom of speech. If I'm just in my apartment computing something, nobody should be able to go after that. And “how are you going to actually enforce that” is a really important question. 

You're not really regulating intelligence — that sounds kind of crazy. Perhaps they'll try, perhaps it will come, like it or not. But I think it will come not in the form of saying “this is what you can and can't compute”, but in the form of saying “this is what you can and can't do in terms of the decisions you're allowed to make based on these computations.” 

Now, those regulations are already there. In insurance, for example, you're not allowed to discriminate based on certain things. In some sense, I do kind of feel like every freaking article ends with a call for regulation. You kind of need that to be blessed by the powers that be and get shared by the right people, and it's never very specific. It's never “we need this specific regulation.” That's my criticism here. But hey, that's everywhere. I read a ton of articles like that. 

Otherwise, this article makes some great points about explainability, which I'll continue to get to in my research. I think that the next wave — the next frontier of innovation in artificial intelligence, the one that has been ignored in the era of Big Data, the era of Google that we've been in — is explainability and understanding; not just explainability, but understanding and causality. I'm not saying that Google does none of that. But I feel like the focus in the Big Data era has just been gathering terabytes — petabytes — of data and using it to, for example, serve ads, without really understanding on a very deep level how people feel about those ads. 

You've all observed that: you got an ad for something you already bought, or you got an ad for something you mentioned but have no use for. It's not very smart, but it has scale, and it has so much scale that it makes so much money, and they care about it a lot. If they built something that actually understood you in a way that these broad statistical models do not, then they could match you up with a product that could make your life better much more easily. Maybe it won't come in the form of ads, but I'm just saying that the underinvestment, I feel, has been in causality and understanding. That's where the openings are in the future for some big breakthroughs, I believe. 

Secondly, the second item that came across my desk was posted on arXiv. It's actually not an article, it's an academic paper, but it's from DeepMind — you can find them at deepmind.com; it's an AI research organization. The paper is entitled “Challenges in Detoxifying Language Models.”

Yes, it is sort of social justice-y research, perhaps pushing an agenda. There has been so much research and effort put into figuring out the — quote-unquote — “problematic aspects” of every language model. But look, I'm not going to read this article just to get outraged, don't worry. Let's actually dive in and see what they might be trying to say. 

But I do feel this is such an obsession of the time. Sometimes things are overfunded, and perhaps too much AI mindspace is being spent on finding bad words in language models instead of on things like medical research and supply chains and — I don't know — stuff that we all need to keep us alive, healthy, and happy. Maybe that stuff's important too, and maybe we should research that. But I feel like half of AI researchers want to find bad words in language models. 

That being said — I said it. Let's go ahead and see what they're talking about. Now, this actually reminds me of some episodes that I did. We talked a lot about this stuff going back pretty far. First of all, there was the episode about the whole kerfuffle — I don't like the word kerfuffle — the chaos at Google, right? Yes. That's Episode 149, which was on the chaos at Google, around the research they were doing there into this stuff, how controversial it got, and how it led to the firing of Timnit Gebru and all that.

Then, if you go back all the way to Episode 18 — way back in 2018 — Aaron and I talked about this there. The title is “AI Gone Psychopathic”. What happened there was a group at MIT put a bot out there that learned from Twitter. Then the bot started tweeting horrible stuff, because people tweeted horrible stuff at it to try to teach it that. 

Look, anyone who knows human nature knows that's what's going to happen. So they pushed this out of MIT as some grand discovery that AI could go psychopathic when it starts repeating things that people feed it on Twitter. But that's what people think about now. 

Maybe too much research energy has gone into this problem. I don't know — and it is an obvious problem. The actual damage that it has done, I don't feel, has been quantified enough. Have we considered that it's too much? I don't know. What do you guys think of that? Email me at localmaxradio@gmail.com and tell me what you think, or tell me on maximum.locals.com. 

What these guys at DeepMind were trying to do is train their machine learning algorithm — their language model, essentially their deep language model, kind of a GPT; I don't know if they actually used GPT-3 — to avoid toxic conversations. 

They wanted to eliminate things that were anti-LGBT, or racist, or against some group of people. They found that when they trained the AI — I should say the language model, to be more specific — to avoid this, they were actually training it to avoid many controversial topics; not all controversial topics, but many. It started to eliminate conversations they wanted it to have. For example, if you are in a group that is verbally maligned online, then when people malign your group, that's often reported as harassment. So their algorithm, not surprisingly, will just try to avoid talking about that group in particular. It's like, “Okay, we don't want to talk about trans people. We don't want to talk —” 

I'm sure they don't want to talk about Jews. I'm sure they don't want to talk about — what was that episode that I did when Henry Abramson's stuff got taken down from Twitter earlier? There was a human in the loop on that. A machine in the loop could do the same thing. 

Again, this is not surprising, but it also shows — I think illustrates — the difference between statistics and understanding. You can have maybe a deep learning algorithm, maybe a very smart machine learning algorithm, maybe a very smart language model, but it just doesn't have the understanding and the causality rolled up in there. It doesn't really understand what these phrases are trying to say. It sort of falls back on, “Let's just avoid these topics altogether.” 
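Just to make that failure mode concrete, here is a minimal sketch of the simplest kind of toxicity-filtered generation loop. This is not the paper's actual method; `generate_continuation` and `toxicity_score` are hypothetical stand-ins for a language model sampler and a learned toxicity classifier.

```python
# A minimal sketch of toxicity-filtered decoding, NOT DeepMind's actual approach.
# `generate_continuation` and `toxicity_score` are hypothetical stand-ins for a
# language model sampler and a toxicity classifier.
def detoxified_reply(prompt, generate_continuation, toxicity_score,
                     threshold=0.5, max_attempts=10):
    for _ in range(max_attempts):
        candidate = generate_continuation(prompt)
        if toxicity_score(candidate) < threshold:
            return candidate
    # If the classifier flags every mention of a frequently-harassed group,
    # the system simply refuses to talk about that group at all -- the
    # over-filtering problem described above.
    return "I'd rather not discuss that."
```

The point is that the filter operates on surface statistics of the text; nothing in that loop knows whether a flagged sentence is an attack on a group or a neutral, even supportive, mention of it.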

Again, I've thought a lot about this, and again, understanding and causality are what they don't seem to want to touch. I feel like the solutions always involve, “Well, we have to give our AI teams more money.” Of course, more regulation — that's always the solution. The other is the need for further research, which I don't disagree with; you have to keep pushing on the research. 

I feel like what they're going to do next is try to build a compensating model, and then they'll find some problem with that. Then they'll build another compensating model, and they'll find some problem with that. All the while, essentially, building in their own biases about what's problematic and what's not, and not actually building a machine that understands the words it's reading and writing. That's the direction I would want to avoid if I were putting research dollars into this. 

I want to back up for a minute and ask a question: “What is artificial intelligence?” That question gets asked of me all the time. I think I have a pretty good grasp on machine learning, but “what is artificial intelligence” is not very well-defined because, first of all, the question of what intelligence is isn't very well-defined. 

Here's what intelligence isn't. I think it's not consciousness, and it's not self-awareness, because those are different functions of humanity — our subjective view of the world. It's also not experiencing the world, because I think one big difference between humans and machines is that we actually experience the world. The machines, as far as we know, do not, and we can't really verify that. We don't know whether we can replicate human experience in machines. We don't know whether we can replicate consciousness in machines, and we don't really know enough about self-awareness. I think we can say, “Yes, we have been able to build some degree of intelligence into machines.” But consciousness, self-awareness, subjective experience — those we either haven't built, or we don't know if we have. 

We'll put that aside for a minute. One definition that I found online of intelligence is the ability to acquire knowledge and skills. That's an interesting definition. It's kind of an easy one, but there might be some problems with that. 

First of all, it doesn't matter how easy it is for you to acquire new knowledge and skills. You have to be taught by someone else, usually — or, yes, you have to be taught by experience. You have to learn that knowledge and those skills. But then, computers can learn anything as long as they're programmed — computers that are Turing-complete, that is. And almost any machine — well, not almost any machine, but any computer you can get — is Turing-complete. That just means you can do any computation you want on there. It might take a long time, but any sort of mathematical computation you can think of, you can do on it. 

For example, a calculator is not Turing-complete because you can't program checkers on your calculator, but you can easily program checkers and chess in your computer. You can program pretty much anything you want. 

By this definition, computers — being Turing-complete and able to be programmed to do anything — can acquire knowledge and skills as long as they have a programmer. But that's not really what we're talking about, because now it's the programmer doing the leading here. 

Then, of course, you back up and you're like, “Well, then all matter is intelligent, because matter can be made into computers, and computers can be programmed.” Maybe it's something about the ease of acquiring these new skills. We kind of feel like if you can acquire new skills, and build skills on skills, more easily than a blank command prompt can, then perhaps you have the ability to acquire knowledge. 

Also, all humans have the ability to acquire knowledge and skills — so maybe it's a pretty good definition for human intelligence. But it's not a very good definition for machine intelligence. That being said, I have kind of divided it up into different — well, there are lots of different kinds of intelligence, but I feel like there are three major categories that I'm often thinking about. 

The first is natural intelligence — which is you and I. I think there's some intelligence in animals and whatnot. In other words, the intelligence that has been created by the natural world in the form of us people; it's using your brain. The second one is artificial intelligence. That's the one we're talking about today. Can we build an intelligence using our natural intelligence — using our brain? Can we build artificial intelligence? Then there's kind of a third one that I think is related to the other two but should get an honorable mention. Maybe this is not an exact category, but I feel like there is an emergent kind of intelligence. 

For example, the economy as a whole has a certain degree of intelligence, where the things that need to get done get done even though no one person — no one node — actually has all of the information available. Now, the economy is made up of many natural intelligences, and also many artificial intelligences. 

There are AIs that are running our apps, running our algorithms, and also allocating our capital, believe it or not. Huge, huge amounts of money are being pushed around by AIs. But I feel like the emergent intelligence that comes out of all that — the economy — is another kind of intelligence. It's almost a superintelligence. It's almost a supercomputer, but it can't do some of the same things that a single human can do, or even a single AI can do. Then, I kind of feel like the ecosystem as a whole is also intelligent. Even if you could say a lot of the plants and animals are not very intelligent, through the course of evolution, and the way a whole ecosystem has been built upon Earth, there's probably some degree of superintelligence in that as well. 

Now, it could be that our natural intelligence is an emergent intelligence. Our bodies create this emergent intelligence; our body is an ecosystem that creates a natural intelligence. It could be that a lot of AIs are also emergent intelligences — lots of different subsystems; maybe you can argue that. But it does feel like there's a difference between natural, artificial, and then the emergent ones I'm talking about, which are the economy and the ecosystem. I'd like to get your thoughts on that.

There's something unsatisfying about the “ability to acquire knowledge and skills” definition. Let's go into further definitions. I have one from the American Psychological Association here, from the Wikipedia article — I know, Wikipedia, but we're going to go into the Wikipedia article today. 

The one that the APA writes is: 

“Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria.”

It sounds like they really don't know what intelligence is. They kind of have an idea — it's one of those things where it's like, “I know it when I see it.” Particularly since the definition is very much reliant on, “Well, it could be different. It could be this, it could be that.” We're not really eliminating very much. But I like “the ability to understand complex ideas.” In other words, there is some idea of complexity there, and then “adapt effectively to the environment” — that's a good one — and “learn from experience,” “engage in reasoning,” and “overcome obstacles.” I think those are really important ingredients of intelligence. 

Now, I've actually heard it given more simply: the ability to perform well over a variety of environments. That kind of makes sense, and it ties in with the American Psychological Association's point about complexity: if you perform well at a task that's too simple to be considered intelligent, you're missing the variety part. You're not going to be good at a variety of different environments, and you're not really open to complex ideas if what you're doing is very simple. I think it also allows for narrow AI — narrow AI being artificial intelligence that only works in a very specific case.

It's not too much variety, but it's a little bit of variety. That's sort of what I think of. A good example of narrow AI would be the chess-playing computers — the chess-playing algorithms. Okay, is “good at chess” good in a variety of different environments? Well, it's not good in environments that are not chess. But chess has an enormous number of potential board configurations. The fact that it can perform well under many different board configurations, and against many different strategies by the opponent — well, that is a sort of impressive variety of environments. Maybe not as impressive as a human. It can perform better than a human at chess, but it doesn't perform in the variety of environments that a human does. 

That's why it could be called a narrow AI, but still AI, because it's not so simple. These definitions do kind of allow for different types of intelligence, for sure, because if you talk about a wide variety of environments or complex ideas, it doesn't mean that you have to perform well in all environments.

If you think about all environments in the universe — basically, right now, if any of us were taken anywhere but Earth, and even to most of the places on Earth, we probably just wouldn't survive for very long. Even if you perform well in a wide variety of environments, it might mean you're good in only one in a million situations. But there are so many possible situations that your variety is still very high. Someone else could be good in different situations — possibly overlapping with yours, but probably not always. And all of that is still a very, very small sliver of — quote-unquote — “all the situations in the universe,” which, again, is a phrase that kind of hurts my brain a little bit. 

Another thing to consider here is that we usually count mental performance and mental learning as part of intelligence. But maybe it's not just mental. They say that your muscles learn as you work out. In fact, it turns out that in robotics, if you look at some of those Boston Dynamics bots, a lot of the research goes into how you teach a bot to learn to move effectively. That's kind of intelligence in and of itself, but I guess we don't consider it mental intelligence — we could put it under physical intelligence — but it does seem like it's intelligence. 

However, all of these definitions do kind of — when you talk about the ability to perform well over a variety of environments, that kind of ignores actually understanding what you're doing. Maybe the one from the American Psychological Association is better in that regard, because it starts with “the ability to understand complex ideas.” Then again, you might say that's the point of understanding: if you do understand, you can then perform well in a wider variety of situations, and if you don't understand, then you can't. 

For example, a large database can be very intelligent under some definition because it has all the information you need, but it definitely doesn't have understanding or reasoning. You might say, “Hey! Who cares? It works. I got a database that makes my software look really intelligent.” Well, that's great, and that could be the best solution in your case. But without understanding, it won't really scale or generalize, so you won't really get the general AI that we're all after. 

Some other markers of intelligence that I think of are the ability to think abstractly, and maybe the ability to think in non-absolutes — that's another area of research I find very interesting, and it's something that's very tough for machines and very easy for humans. We can kind of grasp concepts that have a lot of fuzzy areas. Whereas, when you translate that into machine language, it's tough to tell it, “Hey, yadda, yadda, yadda — fill in the details.” Humans are very, very good at that. 

Also, the ability to follow long arguments is a good marker of intelligence. Machines are exponentially better at following long arguments, so long as the arguments are in absolutes. For example, if you want to build software that checks proofs, you can build that, and it'll be able to check mathematical proofs. Or you can build software that checks the blockchain — that's what all blockchains are based on, that's what Bitcoin is based on. Essentially, the blockchain is one long string of arguments over what happened, and it's very, very easy to verify that via machine. Humans can't really do that effectively. 
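As a toy illustration of that kind of mechanical argument-checking — and this is a simplified hash chain, not Bitcoin's actual block format — here's how easily a machine can verify a long chain of claims:

```python
import hashlib

# A toy hash chain, not Bitcoin's real block structure: each block commits to the
# previous block's hash, so verifying the whole "argument" is a simple linear scan.
def block_hash(block):
    return hashlib.sha256((block["prev_hash"] + block["data"]).encode()).hexdigest()

def verify_chain(blocks):
    """Return True if every block correctly references the hash of the one before it."""
    for prev, curr in zip(blocks, blocks[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# Example usage:
# genesis = {"prev_hash": "0" * 64, "data": "genesis"}
# second = {"prev_hash": block_hash(genesis), "data": "Alice pays Bob 1 coin"}
# verify_chain([genesis, second])  # True; tamper with either block and it becomes False
```

Every step is an absolute, mechanical check, which is exactly the kind of long argument machines excel at.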

But try asking your Amazon Echo, your Alexa, for something that's reasonably complex. Contrapositives are always great. If you say, “Hey, Alexa! If I were five feet tall, would I be able to reach a 10-foot ceiling?” it won't be able to answer that. Another one — I always give this example, and I gave it on Episode 21 when I talked about the philosophy of probability — is: how many molecules of water are in the ocean, or in the Atlantic Ocean? I actually did ask Alexa, and we got an answer. It told me: 4.7 x 10^46. But it appears that it doesn't really understand what I'm asking in terms of how many molecules of water are in the ocean. It appears that it is copying someone else's answer, not actually understanding the problem for itself. If it did understand the problem, it would know that there is no exact integer that really works, because of the effects of evaporation, and where does the ocean start, and there are probably some quantum effects in there as well. 

Do we have to think about that? Well, no. Someone came up with a very good estimate — 4.7 x 10^46 molecules of water in the ocean. I think that's the world's oceans, not just the Atlantic Ocean. But in any case, otherwise it would be more like 10 to the 45th instead of 10 to the 46th — not much of a difference. 
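For what it's worth, that figure is easy to reproduce with a back-of-the-envelope calculation. The ocean mass used here is a rough textbook value, not something from the episode:

```python
# Rough back-of-the-envelope check of the 4.7 x 10^46 figure.
# The ocean mass is an approximate textbook value (~1.4e21 kg), not an exact number.
OCEAN_MASS_KG = 1.4e21          # approximate mass of all the world's oceans
MOLAR_MASS_WATER_G = 18.015     # grams per mole of H2O
AVOGADRO = 6.022e23             # molecules per mole

moles = (OCEAN_MASS_KG * 1000) / MOLAR_MASS_WATER_G
molecules = moles * AVOGADRO
print(f"{molecules:.1e}")       # roughly 4.7e+46
```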

I feel like it would be tough to teach a machine, “Hey, you can think of the ocean as being made of a set of particles, but you don't have to define exactly which ones are in and out. But if you did, the fuzziness would kind of cancel out.” That's a very hard thing for machines to understand. 

Okay, I'm going to end with the Wikipedia article, which gives some quotes on what intelligence is — see what you guys think of them. The first is from psychologist Linda Gottfredson. Actually, it's not listed first, but I found it to be the simplest: “The ability to deal with cognitive complexity.” I feel like that's not a bad one. It's simple, and it actually feels like it comes from someone who has a good grasp of what intelligence is. 

A couple of other good ones that I found in the article: “Judgment, otherwise called ‘good sense’, ‘practical sense’, ‘initiative’, the faculty of adapting one's self to circumstances ... auto-critique” — that's Alfred Binet. Another one is “The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment” — that's David Wechsler. Again, that word “environment” comes up.

The environment means that you actually have to have some kind of external challenge that you're trying to overcome. In this case, it also needs to act purposefully, which I think would apply only to natural intelligences, because maybe artificial intelligence, or even emergent intelligence, you can argue, doesn't act purposefully. 

Emergent intelligence you could think of as “invisible hand”-like stuff, but artificial intelligence is like, “Well, I'm just following a program.” So maybe that part doesn't work. But again, you have some very complex environment — you have to think about it, you have to interact with it, and it's not just a simple algorithm. You have to actually deal with a lot of complexity, a lot of different situations. I feel like that's sort of where we're going here. 

There was one that I felt was the craziest, the most “WTF” out there. That doesn't necessarily mean it's a poor one, but I don't know what this guy's talking about. This is from Alexander Wissner-Gross. It says here, “Intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength T, with the diversity of possible accessible futures, S, up to some future time horizon, τ. In short, intelligence doesn't like to get trapped.” I read that and I was like, “What?”
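For reference, my understanding is that this quote is compressing the equation from Wissner-Gross and Freer's “causal entropic forces” work into words; roughly stated, it looks something like this:

```latex
% Causal entropic force, as I understand Wissner-Gross & Freer's formulation
% (roughly stated; see their paper for the precise definitions):
% a force of strength T pointing along the gradient of the entropy S of
% accessible future paths, out to a time horizon tau.
F(X_0, \tau) = T \, \nabla_X S(X, \tau) \,\Big|_{X = X_0}
```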

I think what he's trying to say is that intelligence is something you can't really boil down to “it does Y when X happens.” It's something more open-ended and very complicated, and it's kind of open to newness — it even opens itself up to newness. That's — I don't know — an interesting one. I want to think about that a little bit more. 

What do you think? What do you think intelligence is? You can email me at localmaxradio@gmail.com, or remember to join our Locals at maximum.locals.com. It's been a pleasure going through these thoughts and ideas with you today. 

I think to sum it up, I would say that, first of all, the idea of defining intelligence is a lot more difficult than I thought, and there is a lot more that's been written about it than you can possibly imagine. It's hard to compare natural intelligence and artificial intelligence. Even if we do have a definition of intelligence, it's hard to compare them. But if you can't compare them, then do you really have a good definition? 

Finally, I think that when it comes to artificial intelligence, understanding what's really going on — actually understanding the environment, making hypotheses about the environment, and testing them — is the big frontier, and that's what's going to make our AIs and our algorithms more explainable. It's going to fix a lot of the language problems that we've been thinking about. 

But it's research that needs to be done. There's no guarantee that there's actually going to be a good solution. But I would hope that there are better solutions than there are today. I feel like that is the key to unlocking greater intelligence, so that we can have — I don't know — better software, a better world, and also augment our own intelligence, because, as I said before, I feel like we're kind of stuck in a rut in this age of Big Data, and maybe we need something better. 

Alright, next week we'll hopefully have Aaron back here, and we are going to talk about all these changes that are taking place — all this exciting stuff. If you guys have been online, you might see… Or if you've been on the Locals, you might have seen a preview of what we did with the studio here.

We'll tell you a lot more about what's going on. Stay tuned for that next week in Episode 193. All the show notes are at localmaxradio.com/192, where you can get everything we talked about today: the BBN Times article, the detoxifying language models paper, the related episodes, and the Wikipedia article on intelligence — you can look that up yourself, but I will still put it online: localmaxradio.com/192. 

Have a great week everyone!

That's the show! To support the Local Maximum, sign up for exclusive content and their online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week!
