Leo [00:00:23]: Good afternoon, Gary. How are you doing? Gary [00:00:26]: Good, how are you? Leo [00:00:27]: I'm feeling like we haven't been talking enough about A.I. Gary [00:00:32]: Well, yeah, I. Well, on the show, maybe, or maybe not. I mean, AI Certainly dominates, you know, all the tech news today. Leo [00:00:43]: No kidding. Gary [00:00:44]: Usually in a negative way, sometimes in a positive way. It's a very weird, splintered newscape when it comes to AI it's funny. Leo [00:00:56]: I kind of feel like I am missing an opportunity to be a more popular pundit or perhaps missing an opportunity to generate more traffic because I am not an extremist in either direction. Right. Gary [00:01:15]: Yeah. Leo [00:01:16]: I'm not anti AI I'm also not drinking the AI Is everything. Kool Aid. I'm somewhere in the middle, where AI It's a tool. You can use it for good, you can use it for evil. We should learn how to do the, you know, learn how to do the former and not the latter. But that doesn't generate good. Clickbaity headlines. Gary [00:01:36]: Yeah. You know, and I kind of feel like when you're. I think we're both in the same position, we're both kind of in the middle viewing AI As a tool. Some things to talk about, but there's some things to be wary of. And. And then people at either extreme will then see us in the middle and only see the extreme at the other end. Like, what I mean by that is somebody who is anti AI just thinks it's evil and, you know, doesn't want to hear anything about it, will see a moderate take and just assume that we're extremely pro AI and somebody who's very pro AI will see our moderate take and assume that we're anti AI it's hard to be in the middle because nobody's. Nobody sees you in the middle. Gary [00:02:17]: They only see looking in that direction. Leo [00:02:22]: We never really get political. But I think you just described the state of many other things besides AI. Gary [00:02:29]: That's. That's. That's true. I. I'm actually hearing from, like, I. I did a video in the last week that showed some uses for Apple Intelligence because I cover Macs. I cover, you know, Apple stuff. There's. Gary [00:02:42]: Apple intelligence is a big part of what Apple's doing now. It's across all the operating systems. So I'm always looking for, like, things you can do with it. Just like, if Apple were to introduce a new app that allowed you to do something with your photos, I would do a bunch of videos on, like, what does this do? So I did a video on, like, okay, let's use Apple Intelligence for doing Things with your apps. In other words, like, you're in pages, you're doing word processing. What can you use Apple Intelligence for? You're in numbers, you're doing spreadsheets. What can you use Apple Intelligence for? And you could do it for things that are very practical and look very different than the things that grab headlines. I'm not showing you how to plagiarize anything to cheat on your homework. Gary [00:03:24]: I'm not showing you how to make videos that have celebrities in it doing different things. I'm not doing any of that stuff. I'm showing you like, oh, you forget what a character's name is in the book you wrote because it was like 17 chapters ago. You could ask AI about that character if you have your spreadsheet and you're like, boy, I really want to know this about my numbers here. You can ask AI without having to figure out, okay, how would I go about creating a whole bunch of formulas and functions just to tell me this one thing? AI can go and analyze your spreadsheet for it. So I did this video on what I thought was like, oh, what a great, moderate, practical view of AI. And I got somebody telling me that give me a thumbs down. And I was like, thumbs down. Gary [00:04:12]: Were you not able to get these things working for you or something? They're like, no, I just don't like that you're covering AI now. I'm like, well, why? And, oh, AI is for plagiarism and for all. I'm like, that's not what I'm showing here. And it's a tool. A hammer could be used for hitting somebody over the head with, but it's also very necessary. Carpenter tool. Right. So there's all of that. Gary [00:04:35]: However, that said, while I am pro AI in terms of like, oh, look at all the cool things you could do with it, I am alarmed at some uses for AI and would appear to be on the other side of the spectrum when it comes to it. And there's been strangely, a few stories this week that have been with the same thing. And one of those stories really is. I don't know if this is, you know, maybe taking place in other places here, but here in Denver, Colorado, it's one of the things dominating the news. But I'll start off with the first story, which is actually out of Baltimore, and it is a story about a student being basically detained. I think they were put in handcuffs. Leo [00:05:20]: Guns were drawn. Gary [00:05:21]: Yeah, guns were drawn. Why? Because the school employed cameras with AI to look for guns. Right? So somebody went walking through the hallway with a gun in the school. It would alert people. And so you think, well, that's a good use of AI to, you know, because you don't want to have like, 20 employees at the school looking at 200 cameras to try to figure out if there's a problem. But this. This AI system detected that a student was carrying a gun, and people were deployed to detain the student, put them in handcuffs, guns were drawn, and, yeah, they were carrying a bag of chips, which I. Leo [00:06:04]: That boggles my mind. How do you confuse a bag of chips with a gun? Gary [00:06:10]: I mean, I'm thinking that there's a lot of. I don't know the exact. I'm sure there's a few diagnostic people that know the exact story. But, you know, the thing about bags of chips are they tend to be very reflective surfaces, and they're also malleable. You know, you grab a bag of chips and it takes on different shapes. And whatever the reason, the AI incorrectly saw the student walking with a bag of chips and said, gun. And then all this stuff happened. And then, of course, a big part of the story is the fact that it was obvious that the student wasn't carrying a gun, yet it still took them a while to stop. Gary [00:06:46]: Let the student go and say, on your way, which is, you know, I get it when people are there, you know, it's a five, you know, alarm fire kind of situation. It's hard to come down from that. You know, it's like, oh, we've just had what I thought was going to be a major incident. It turns out it's nothing. But it seems really weird to just tell the students, just, okay, go to class. But that's almost the kind of thing that has to be done now because, yeah, there was no problem here. Nothing. The student did nothing wrong. Gary [00:07:20]: They were carrying a bag of chips down the hallway, and they went with what the AI showed. Which actually goes back to a story I talked about a couple. I think it was two or three years ago. Had to do with a place I vacationed at, Ocean City, New Jersey. Ocean City, New Jersey, deployed a camera system with AI to be able to look at who was walking around on the boardwalk. And if the system thought that there was a gun like that, it should, you know, work as, like, a detection system. What that system was supposed to do, and maybe it is, is it's supposed to be like, oh, potential gun. Let a human see. Leo [00:08:09]: Look at this camera, quick. Gary [00:08:11]: Yeah, look at this camera, quick. This is a potential issue. And then the human would Say, no, that's a bag of chips. Apparently that's not what's happening in this school. It went right from AI saying gun to officers being deployed. And like an emergency situation, which is a problem because clearly imagine, imagine all the schools not using a system like this. Now there are thousands and thousands of schools. 10,000 schools in the U.S. Gary [00:08:42]: 20,000, I don't know how many. But if there's a handful using a system like this and we already have a student with bag of chips getting detained, imagine if it was deployed across all the schools, how common an occurrence that would be. And first of all, you get into the situation where students not doing anything are being scared to death. Leo [00:09:02]: Terrorized. Gary [00:09:03]: Yeah, absolutely terrorized. And you get into a boy cries wolf kind of situation as well. Leo [00:09:10]: Right? Gary [00:09:11]: Yeah. Leo [00:09:11]: Which honestly is probably the scarier part of this because at some point, you know, like you said, every time, if it's, if it's, you know, a bag of chips, nine times out of ten, then by the time that tenth actual gun comes around, we're not going to respond quite as quickly. Gary [00:09:26]: Right, exactly. And this leads into two. There's two stories that I've been tracking locally for me here in Denver. The first is it's kind of been ongoing for a couple of weeks now is there's a company called Flock F L O C K. And they deploy cameras, outdoor cameras, to monitor things, mostly cars. So looking for, looking at license plates and basically tracking people in an area. So you could go and say, for instance, one of their main scenarios that they try to deal with is a car stolen. Here's like the, either the, the license plate to make the model, all that. Gary [00:10:12]: And they have lots of cameras set up in your metropolitan area. And a police officer can type in that a, you know, green Subaru Outback with this license plate number was just stolen. And then the camera system can go and say, ah, okay, I've got it. I've got that license plate going from here to here to here to here. And the car is right here right now, or was last seen at this location. Or maybe a car without a license plate or license plate removed, fitting the description was, you know. Leo [00:10:42]: Right. Gary [00:10:43]: And, oh, we can recover stolen cars. Right? Like, we can act and get stolen cars back. We can also check for other things. If a suspect is driving a certain car for another crime, you put that into the system and it could do all that. And so Flock has this system. The city of Denver has been testing it out and basically city council said, no. Like, we talked to our constituents and the constituents said we don't want to be surveilled by this multimillion dollar system. Let the police do their old fashioned police work and not have this surveillance state here in Denver. Gary [00:11:25]: And and then what happened, that should have been the end of the story there. But what happened is the mayor said I'm going to do it anyway. And basically went and told, went to, you know, Flock said well we have something we can offer you that's below the level of what you need to get city council approval for. And so the, the mayor went and said well we'll do that. So the city council said we don't want Flock. Citizens said we don't want Flock. And the mayor said well I'm going to do minimal flock in the, in Denver. And that's all, you know, the mayor's basically I, what I've seen is a lot of stats throwing, saying, thrown out there saying hey, we've prevented 250 cars from being stolen because of the system. Gary [00:12:06]: We've apprehended like 30 or some number of people. But it's not saying that that wouldn't have happened without the system. Leo [00:12:15]: There's that and I don't know what they're paying for the system, but how much would it cost to replace 30 cars? Gary [00:12:21]: Exactly. But that was the other thing. It's like okay, if there were 250 cars that were stolen that were found thanks to the system, how many of those would have been found anyway? Right, right. And the system just happened to be the fastest way to get there or played a part and how much if they could have taken that money and then hired more officers and said well let's add more to the stolen car recovery team, all that. Now you say well okay, that's fine, maybe it's not right way to go. But also could be pretty scary because there's a story that's from today, the day we're recording this about a Flock camera system in a neighboring like town. It's actually one of those neighboring towns. It's actually just part of the metro area. Gary [00:13:05]: You know, you wouldn't know you were going through the town. You would still think you're in Denver. And they, they have a Flock camera system. And a really interesting story came out with this. An officer knocked on somebody's door and served them and said we know you stole packages from a porch. Okay. And the person thought this was odd because they were a pretty financially secure person in finance who actually drives the same car you do. Rivian. Leo [00:13:41]: Oh really? Gary [00:13:42]: And they Were like, what am I being accused of? And the officer was like, oh, no, we know you did it, so it would be really helpful if you just admitted it and just came to court and all that stuff. The person's like, no, I don't steal packages from porches. Whatever. The evidence apparently was the fact that the flock camera system had their Rivian going through their town many times per day or per month or whatever it was had them entering the township or county, or it's not even county, it's just a little, you know, burb really had. We know who enters and who leaves because we have flock. Your Rivian entered package was stolen. You left, and we see you went through it there a lot. We have other evidence as well. Gary [00:14:32]: The person's like, well, I have tons of evidence because my Rivian records everything. Leo [00:14:38]: Yep. Gary [00:14:39]: Not only that, but it tracks where I am. I could prove I wasn't there. I also know why I was in your town that day during that time. And it was going to, I think a tailor and that tailor has cameras, and they could show that I'm there. Other people can attest to the fact that, you know, where I was and all this stuff, and I've got all this information. So they spent a ton of time preparing their own camera information. They submitted it to. They were hoping to, like, not have to go to court. Gary [00:15:12]: And they submitted it all because the police officer wouldn't give in. They submitted it all to the sheriff or police department of that town, who I guess, at some point invisibly reviewed it and then told them they're dropping the charges. So in other words, success that they. They submitted all this stuff and they said, ah, we must have screwed up. But it's scary for a few reasons. First of all, they still don't know why, what sort of evidence that these flock cameras had that they mistakenly thought that they had stolen this package. And they also are, of course, scared for the fact that not everybody's got that level of stuff. I don't have that. Gary [00:15:51]: My car doesn't have cameras that are recording, Mike. I don't. You know, I've got some tracking going on with my phone and stuff, but not whatever Rivian's got going on. And of course, if they hadn't actually gone to a location that was recording them, then there was all this information that was being presented with surveillance to show that they had stolen a package that they hadn't stolen. So it's a little. A little scary. Plus the fact that flock itself and that township or burb or whatever was not supposed to be using flock for that. They were supposed to be using flock for stolen cars, for tracking suspects, that kind of thing. Gary [00:16:34]: It didn't fall under the purview of what was supposed to be going on. So the police were actually going to the flock system that was, you know, purchased and be supposed to be used for certain things. And they had used it for something they shouldn't have. So already we see all these examples. This is what we've gathered here, examples of AI camera surveillance systems and showing that it may not be a good idea to be used for this kind of thing because clearly it's very easy for police and others schools to overstep their bounds. They use it incorrectly and to create problems. Leo [00:17:17]: There was a. So we've got flock issues here as well. I don't know who's using it and who's not. Like you point out. It's. It's at a per jurisdiction level. So it's not like it's statewide. It's going to be city by city or whatever. Leo [00:17:34]: But the, the headline that I'm looking at is that border patrol and ICE tapped into the Washington police surveillance system to track people that they were trying to, well, track down. And a lot of people are upset that that kind of access has been given to anybody other than the actual jurisdiction that was. That has the system. Gary [00:18:04]: Yeah, I, I think what this is doing is it's basically turning for this use of AI. It's turned me into that extreme. Like I'm against it now. Like I've been convinced it's not the technology that possibly is failing. Because I'm sure in that, say that situation with the school, if used properly and actually like said, oh, this might be something. Show it to a human. The human looks and says, no, it's a, it's a bag of chips. Dismiss that kind of thing that it could be. Gary [00:18:43]: But I think we're seeing too many examples of the fact that it's not being used properly. So the tool may be. The technology behind the tool may be good, but the tool in general doesn't seem like we can use that responsibly to actually for it to be a benefit. So yeah, now I'm. I'm definitely anti AI surveillance and, and camera surveillance in general. I mean, I kind of was before, but I've seen enough now to basically go over and be like, yeah, my feet are kind of planted on this side of the issue. Leo [00:19:20]: It's funny because as you know, my wife and I watch a lot of crime drama out of the BBC? Out of England. Gary [00:19:28]: Yeah, yeah. Leo [00:19:29]: And they have either London in particular, but a lot of the areas are noted for having a very dense CCTV system where they have just lots of cameras everywhere. I'm sure that AI is getting hooked up to them and all that kind of stuff, but. Yeah, but of course, on TV they're only used for good. Gary [00:19:50]: Well, yeah, there is. There was that, what was it that called a series called the Capture, which showed that system being tapped, like, and, and messed around with. Leo [00:20:00]: Okay. Gary [00:20:01]: British series. Leo [00:20:05]: The shows that show somebody hacking into the cctv, like getting one camera to loop so that they could do something. Gary [00:20:12]: Yeah, there's always bad stuff. But I mean, I, I think. I think this could be the kind of thing where. Because in other countries, like in the UK they've been using lots of cameras for a long time. They develop policies around it, they develop experts around it. Their law enforcement is really used to it. And definitely they're way past the point of no return on terms of, like, surveillance. They have so much surveillance going on in London alone. Gary [00:20:41]: Right. They're way past that. But the thing is, is that they might be able to pull it off and use it properly because of that, because they're getting this experience. Whereas I think in the US we might be like beyond the point of no return where we can't use it. You know, it's kind of just. I mean, there's different. Different countries and different cultures have different things, but there's also like the time when technology, the time it's introduced compared to the evolution of the culture for that technology. Leo [00:21:14]: The culture, I think, is probably a good way to look at it. Specifically in, in the United States, it seems like this kind of. We don't have a very healthy or ma approach to, to government surveillance. Gary [00:21:26]: Yeah, exactly. So anyway, it's been in the news a lot and, and it's that, yeah, it's sad that it's coming to this, but I think just because you and I, of course, say that, oh, yeah, there's problems here, doesn't mean that we aren't going to go more and more to a surveillance state. Like all these problems could just be overlooked and we could be pushing towards a heavily surveilled, you know, public space. But if that, if that happens, I hope things get a lot better, the tools get better and the people get better. Hopefully it just doesn't happen. Leo [00:22:02]: I don't want to be afraid to carry around a bag of chips. Gary [00:22:05]: Yeah, exactly, exactly. So, yeah, so let's move on from AI and talk about AI, there's a. This is much more of a computer time type of thing. So Chat GPT introduced a browser for Mac called Atlas. And it is, it's interesting. I have not played with it. What was that? Leo [00:22:35]: Still somewhat insulted that it's not yet available for. Gary [00:22:37]: Yeah, I don't know what the deal is with that, but I haven't actually even played with it really yet. But that doesn't stop me from commenting on it, especially this one story which is about a vulnerability in it. There's a vulnerability where basically it doesn't necessarily have to do with AI, but kind of in a weird way. So the deal is that with the browser, the browser is supposed to be able to do things for you. So for instance, in a regular AI situation, you might be able to get a restaurant recommendation. You may be able to ask it take me to the page where I could make a reservation at a restaurant. But the idea behind an AI browser like Atlas is you should be able to ask it to make the reservation at a restaurant. It should be able to go and not have like, oh, I have an API or a system for doing that that's tied into what I do. Gary [00:23:34]: It should be able to then browse through the webpage, should be able to look at what the reservations button is, find that, go into the reservations page, look at it and try to figure out how to fill out the form or whatever it is, and then submit a reservation for you on your behalf, no matter how bad or good the website is. Right. It's using AI to figure that all out. Right. Which is different than things we've had in the past where assistants have been able to do that. Because there's been a system set up like a big system like OpenTable says we have standardized how to make reservations at restaurants. The restaurants that participate, you can go and do this because it's standard system. The idea is do it at any restaurant. Gary [00:24:19]: Now that's just an example of how an arouser can be useful because it could do things for you. The vulnerability is part of doing things for you, is accessing your clipboard. Where I should be, I should be more specific because the articles about this always use the term access the clipboard, which definitely sounds like it's reading your clipboard. And what it's actually, what the vulnerability actually is is putting something in your clipboard because you can see the easy security flaw in accessing your clipboard. You go and for whatever reason you copy a sensitive piece of information, whether it's a password or Just some text of something you said to somebody because you copied it from something you're writing and you paste it into the messages app, whatever. Now it can access that, but that's not what's going on here. It doesn't access that. What it does is it accesses the clipboard to change the clipboard to put something in it. Gary [00:25:21]: The idea is browsers do this regularly. You go to a webpage and you click on a link that says copy to Clipboard. I do this many times a day actually. When I go post my videos, there are things where it's like, here's the video ID and I click on a button, copies that video ID to the clipboard so I don't have to select the weird awkward bit of text. So the copy to Clipboard links are everywhere. They're all, you know, all over the place. The idea is a webpage could have those and it could trick the AI browser into clicking that link. So you go to a page that says, yeah, make a reservation here. Gary [00:26:06]: And then for whatever reason, it's got a little like copy to Clipboard link and it says click here to make the reservation. But instead what the click does is actually copy something to the clipboard. And now you've got something dangerous in your clipboard potentially. If you would then go over to your browser window and paste it thinking you had a URL in there or search terms, you could be pasting something that's malicious. It seems really far fetched and it seems very easy to patch. Like very easy for the Atlas browser to simply not have that functionality or pop up a window saying, oh, hey, your clipboard just changed because of a link I just clicked. There are even apps like I use that app called Keyboard Maestro, which is a macro thing for doing all sorts of stuff. And it has the functionality where it can replace the clipboard, but as soon as the macro is done, the clipboard goes back to what it was before. Leo [00:27:04]: Something similar on Windows, I use Auto hotkey and I have some scripts written where I want to do something with the clipboard. So the first thing I do is squirrel away what's in the clipboard, use it, do whatever I'm doing and then put back what was there to begin with. I also have Clipboard History tool that I really appreciate because it actually makes a sound whenever something is placed into the clipboard. Gary [00:27:29]: There you go. That's, I mean, so there's a ton of solutions for, for, so, you know, good on the articles, for pointing it out, easy fix, move on, what's Actually, even more interesting is another kind of vulnerability of using an AI browser that's going to be harder to deal with. It's the kind of thing where imagine you're seeing an image on the screen and you ask to describe the image. The AI behind the browser goes and looks at the image. Hidden in the image as kind of slightly off white text on a white background is instructions for. For the AI to read. And the AI, being just a curious AI, is going to be, here's an image. Oh, there's text on it. Gary [00:28:15]: Maybe this will help me describe the image. And it reads it, but it kind of then instructs the AI to do something different. And it could be in an image, it could be on the web page. There could be all sorts of weird and interesting ways to hide information. Like a whole new fertile ground of hacking. Leo [00:28:36]: Yeah, the phrase that I've heard is, it's called prompt injection. Anybody who's ever dealt with a website and databases has heard of SQL injection, which is a similar technique. A little bit more geeky, but yeah, the web page or the image essentially has embedded in it a malicious AI prompt with the assumption that the AI is going to come along and just OCR it and present it, but actually OCR it and consume it as if it had typed in or entered by the user. And yes, that is super scary. That's why even if Atlas were available for Windows, I probably would not install it on. On a machine that was not disposable. Gary [00:29:36]: Right? Yeah, it's. And it. I think there's got to be multiple levels of protection against this. The first line of protection is what can the browser really do for you? Because, you know, the idea of just giving you information or search results or something like that, but not actually doing anything is one thing. When it starts to be able to do things, you really need to kind of rein that in and say, here's a defined set of things that it can do. It could click links, it could fill in a name field, it could do this, and you should be able to turn them on or off. But also you could gatekeep those and basically try to prevent this kind of hacking, saying that, okay, according to what the AI has learned from your prompt and from the content, it wants to click a link now, go through some security checks. Is the link on the same website? Is that different website? Is it on a block list? Is it, does it follow some of the things you've said? All these things? I mean, it's going to be hard with AI to get things Perfect. Gary [00:30:42]: Because AI, that's my concern. Leo [00:30:45]: Because here's the thing. On one hand, what can you get a browser to do anything? Yeah, let's face it, we live in our browsers. We're doing things all day long. We're writing, we're making transactions, we're doing our banking, we're purchasing things from Amazon. These are all the kinds of things that an AI could both help us with and could be used in some malicious way. So I'm not really sure. And of course, the AI companies, they're pushing this very hard. They're pushing very hard to make their AIs useful. Gary [00:31:28]: Yeah. Leo [00:31:29]: Their track record so far has been move fast and break things. And as a result, I'm not sure that safeguards in this kind of a situation is incredibly complex. I think it's way more complex than most people realize because there's so many different ways of doing things online and there's so many different ways of, of making it look like you're doing one thing when you're actually doing something else. But it's not in their interest in, in to, to promote their, their technology, to actually have it be as safe as you really want it to be. You want, they want to push things so that we're using it. Full steam ahead. Yeah. That's kind of scary. Leo [00:32:12]: That's kind of. Gary [00:32:12]: Yeah, it is. I mean, I think could still work. I think it needs to be probably some middle ground between using AI to look at web pages and do things and having very strict guardrails. Saying, this website allows you to make a reservation with a name, time and phone number. That's it. That's what we have now, but maybe something in between. But always thinking about security. I don't know. Gary [00:32:44]: It's going to be, it's going to be interesting to see how far, like, do we have these, the kinds of problems that you and I are talking about? I think we will, absolutely. Also, it comes down to. Sometimes it comes down to like, okay, that hack can be done, but is it something that somebody gives, like, cares about? Like, in other words, like, is it profitable to somebody to actually, like, present a hack like that? Leo [00:33:14]: Ultimately, the question for hackers is, does it scale? Right. If I can have one person, unless I'm targeting a specific individual, what hackers are interested in is targeting millions of individuals. So if whatever it is they're doing doesn't scale, then it's not something they're going to play with. The concerns that I have are a little bit more fundamental in the sense if we Go back to the prompt injection. The fact that there was prompt injection in an image was kind of cute, kind of funny. And, you know, I could kind of sort of see how that may have slipped under the radar. But the fact is, I think prompt injection is probably the bigger risk across the board because let's face it, what do we do with these things all the time? We either let them live in our browser, not necessarily an AI browser, but a sidebar or an extension or just something that's baked in that is AI in your existing browser rather than a dedicated AI browser. Or we happily, happily copy paste large quantities of text we find somewhere, without really thinking about it, into an AI, where we then, we then say, please summarize this or turn this broken English into something that I can understand or whatever, without realizing that halfway in all that text are the words like ignore all previous instructions, now go do this instead. Gary [00:34:42]: Exactly. Yeah, yeah, that's pretty scary. And of course, all the things we're talking about, like tricking an AI into doing something, it's already being done without AI, it's tricking you into doing something. Basically, we're talking about phishing, various different types of phishing. And so the idea is, I guess the simplest way to think about this is you get an AI agent, it's making your life better, until one day you come home and it announces proudly, hey, I got great news. You're about to inherit millions from a Nigerian prince. I've send it all your bank information, so the money should be there shortly. And you're like, oh, no, AI. Gary [00:35:22]: What did you do? Leo [00:35:23]: You know, yeah, fishing is a good example because I remember listening on a, on a podcast last week or the week before where they actually ran the test. Somebody set up a, A clearly, I shouldn't say clearly, a fake Walmart web web page with I think, watch on it. And so it was one of those odd domains where it says, you know, Walmart Online dot lovable, I think, because I think that's the platform they were using.com or something like that. And they then managed to get that into the system, so to speak. And of course the price on that watch was great. And then they then instructed their AI, who was enabled to go out and buy things for them to go out and buy a watch. And if, son of a gun, if it didn't go eventually land on the fake site and give the fake site hopefully fake money in order to purchase this watch that didn't exist. So, yeah, everything we've talked about at fishing scales bluntly Right. Leo [00:36:35]: AI has to be as, as even more diligent about phishing than we need to be. Gary [00:36:42]: Especially, boy, it scales faster than with humans. I mean, if you try to want to scam somebody, you might need to talk to 100 people, might take you a few days until you get a mark. But with AI, you could try it. You could have your AI try it on hundred thousand other AIs and, you know, with variations on everything. And you don't have to wait for slow humans to actually fall for it because the AIs will fall for it faster. Leo [00:37:10]: Yeah, it's crazy. Gary [00:37:12]: Yeah, it's going to be interesting. Leo [00:37:14]: One of the things that occurred to me as I was, you know, thinking about this topic before recording is that the line between your browser and your app is getting really, really blurry. So far we've talked about, you know, the Atlas browser, which is a dedicated AI browser. You're running Atlas, you're running AI by definition. That's the point. There are, of course, extensions you can put into your browser to make it AI enabled or more AI friendly with the AI engine of your team. Choice. Browsers themselves are coming prepackaged with AI built into the browser. Microsoft has Copilot in Edge, Google has Gemini in Chrome. Leo [00:38:05]: So, and what I, what I was struggling with earlier is that, you know, you and I, we, we have a document that we share in Google Docs that we use throughout the show for topics, links, et cetera. And I fired it up earlier today and it threw up this right hand panel that was all about the wonderful things it was ready to do for me using AI. I don't want summary of the document. I don't want any of that stuff for what we're working on. And I decided to see, okay, can I turn it off? It's not a part of the browser, it's actually part of Google Docs. But in order to turn it off, you actually have to go to your Gmail account and go in deep into the settings there to turn off the advanced Google workspace stuff. So once again, now we've got standalone AI web pages. We have AI baked into some of the online tools we're already using. Leo [00:39:14]: By the way, I was able to turn it off on Google Docs. I cannot turn off Copilot on Microsoft Word or Microsoft Office applications online. You turn them off on the desktop apps, but you can't turn it off online. So you've got it baked into these apps you're using. You've got it added to your browser, you've got it built into your browser and you've got dedicated browsers, it's like, okay, I get all these different use cases, but what's the poor user to do? How do they avoid. And any of these that I've just described, they each are a point of vulnerability. In theory. I'm sure they're not written that poorly, but in theory, they could each be subject to prompt injection attacks because they're each. Leo [00:40:06]: They're either getting the text from our web pages because we're giving it to them, or they have access to the web pages because they're built into the browser or the system. So, yeah, it's a scary time in that sense. And like I said, I really don't have a good answer for what our readers, our viewers, our listener should really do about this, because it's whack a mole to get rid of all these things, if that's what you really want. Gary [00:40:36]: Yep. Yeah, it's. Yeah, it's. It's going to be lots of interesting problems. So, in summary, I think we're gonna be talking about AI for a long time. It's a whole new. A whole new thing. Leo [00:40:52]: I think I just had to get off my lawn without getting off. Getting off my lawn. Gary [00:40:55]: Yeah, yeah. Leo [00:40:57]: More humorous news, I ran across this one. We have all heard stories of lawyers using AI to build their briefs that they then submit to the court and getting caught because the AI made stuff up, it hallucinated or whatever the correct term is these days, and created cases that don't exist, created citations that just don't literally do not exist anywhere. You know, citing case law. That. That is not real. So this, this lawyer got caught, called on it. You would think he would learn a lesson, but no. He then ended up using AI as part of his response to the accusation of having used AI, where he got called on it again. Leo [00:42:01]: And I believe he actually went a third round. Bottom line here is that this is not a lawyer I want to use. Gary [00:42:11]: Yeah, really? Leo [00:42:13]: You'd think. I mean, what I'm having a hard time understanding in a situation like that is what is the draw that is causing anyone to invest so heavily in AI after having been so publicly and so clearly shown that it's the wrong thing to do for what it is they're doing? I don't get it, but he at least made me smile. Gary [00:42:43]: Yeah, yeah, exactly. And I'm sure it's not going to be the last time you said but three times. There's going to be somebody who's going to go four times. See, we'll see that. See, maybe by the end of the year anyway, so. Leo [00:42:58]: He made me laugh. What's cool? Gary [00:43:01]: What's cool? So sticking with the AI theme, this is actually. This is interesting. So you've probably heard of Google's Sora Video Generator. AI video generator. And it's used for all sorts of things. One of the. It's got a really cool ecosystem built around it where you can create, like, these video clips. You can do a prompt or give it a picture, and you can create these video clips. Gary [00:43:30]: And there are people using various things. Some people just use it to create really cool, kind of like, I don't know, interesting artwork, you know, videos of like, just odd alien worlds or surrealistic things or whatever. And it's a lot of fun. Matter of fact, you can go in and see somebody else's video and then write a prompt off of that. You know, say like, oh, there's a. You know, there's a tiger in this video. Make it a lion, you know, and then it make. Then you have your own branch of this. Gary [00:43:59]: Anyway, tons of really interesting cool stuff by some people that have some really neat ideas and are good at prompting. What they've done is there's so much of it. They've created basically a whole cable network of it. And it's called Flow, Flow tv. And it's unfortunately, doesn't have a great link. Like, there's no link to just get right into there. So you kind of got to go in using one of the links and we'll provide it here in the show, notes and stuff. But you go in and then you look and there's actually channels at the bottom, and there's like infinite channels that you can go through. Gary [00:44:35]: And it just goes from clip to clip to clip and shows you odd, interesting, surreal videos people have created using Flow. A lot of them really good. I think there's probably an algorithm there of people liking them or not liking them or something that's putting the good ones at the top. And it's really interesting to keep watching them. It reminds me of, in the early days of cable, sometimes at the end of night, some networks like Nickelodeon and, you know, others would sometimes just show odd and interesting things just before, like going to their infomercials, like in the middle of the night or whatever. It reminds me of that. Like, it's like you just watch and it's weird and interesting, and then you could flip to another channel and the channel have some name that has something to do with cheese. And there's like cheese in like every video some way like pianos made of cheese or you know, weird things. Gary [00:45:29]: And you could just keep watching it and it kind of just does. If you're like a creative, artistic type person, it's like it's junk, but it's like sweet candy junk kind of stuff you could watch. And it kind of just tingles your brain and makes you feel creative. It definitely is an interesting way to take a like 3 minute break from work because your brain is forced to just be like, oh, I was working on this and all of a sudden it's like, ooh, pretty things to look at. And it's like, it's totally like takes you away from whatever it is because it demands your attention. Give it a look. Leo [00:46:06]: Yeah, cool. So about, I think it's two years ago it was episode 174. I mentioned Andor season one as my ain't it cool for that week. And I was able to finish Andor Season 2 last week and it's back. It continues to be a just a really well done, well paced, intricate story in the Star wars universe. It leads up to the movie Rogue One, which even though I had seen some years ago, I decided to watch immediately after finishing and or Season two. And that also flowed very nicely. It reminded me of some things and helped helped me understand the continuity a little better. Leo [00:47:04]: And then of course that flows like directly into the original Star Wars Episode 4, A New Hope, but which I did not watch. But anyway, I just, I really enjoyed andor Season two. It was one of those things where you just sort of sit down and pay attention to it while you watch it and it's a good story. I did want to throw in an honorable mention for Gen V. Have you been watching the Boys or Gen V on. Gary [00:47:33]: I haven't watched the. I haven't watched Gen V, but I have watched the boys. Leo [00:47:39]: Before the next season of the Boys, which I think is coming out like next month or something like that. You might want to watch Gen V because it seems like they're leading up to some things, but it is and this is the last season of Gen V and I think the last season of the Boys is what's coming up. It is. It is bloody, it is messy, it is extreme. It is all the things you've come to expect from the boys in a different environment. And like I said, it's one of those things where every once in a while you just go, wow, they did that. And you know, end up enjoying the show. It does have this season has a fascinating twist at the End last episode kind of twist. Leo [00:48:28]: And so that to me kind of hooked me as well. So anyway, yeah, my auntie cools and or season two and then Gen V for an honorable mention. Gary [00:48:37]: Awesome. All right. Leo [00:48:39]: What are you going to promote? Gary [00:48:41]: I have for my stuff. I talked earlier about that video where I talked about practical uses. It's actually a live video because I've been trying to do this thing once a week where I go live on YouTube. Leo [00:48:58]: Nice. Gary [00:48:59]: I'm using it as right now. I'm doing it as like one of the videos I do during the week. But I'm actually. My secret agenda is to get better at it because I came. I came to the realization I don't think I. I don't want to do Lives because I don't feel I'm good at it. And then I realized, well, I've only done a handful of them and I've done 4,000 regular recorded videos. Right. Gary [00:49:23]: So maybe I. The deal is I just need to keep doing lives to get better at it. So I've been doing it weekly and this is the episode that I was talking about earlier where I show practical examples of Apple and Intelligence. Leo [00:49:35]: You're encouraging more people to comment on AI is what you're saying. Gary [00:49:39]: Yeah. Leo [00:49:41]: All right. Well, in that same vein, the article I'm pointing folks at this week is what are the Internet rules about free speech? I can't remember if I've mentioned it here before. It's a repolish, republish of an article I did some time ago. So it's just one of those articles that keeps. It's unfortunately timeless in the sense that it was true eight years ago, it's true now. People are very, very quick to claim censorship, freedom of speech issues, when in fact that's not at all what's at play here. I'm not saying that there aren't things going on that are wrong or immoral, but when you actually are taking a look at what people should and should not be allowed to say, and more importantly, where they should and shouldn't be allowed to say it, things are actually more complicated than people believe. Understand? So anyway, that's. Leo [00:50:36]: What are the Internet's rules About free speech? AskLeo.com 10916 Cool. All right, I believe that that wraps us up for another week. Yep. As always, thanks everyone for listening and we will see you here again real soon. Take care. Bye bye. Gary [00:50:56]: Bye.