Leo [00:00:27]: So Gary, were you right? Last week's episode, you were making some predictions about what was going to happen the next day. Gary [00:00:39]: We both talked about the fact that it seemed kind of unlikely that Apple would choose the name MacBook Neo and not just call it a MacBook, but they did in fact call it the MacBook Neo, and so far it seems to be really working for them. Nobody's complaining about it, everybody's grasping onto the name, and it's gotten a pretty positive response for lots of reasons. I think it's one of these little maybe-genius marketing things from Apple that's really going to help this thing catch on. Leo [00:01:16]: Is it any good? Gary [00:01:17]: Well, it is fascinating. I've gone on a little bit of a journey myself in the last week just dealing with the news, because the news came out and it was, okay, let's take a look at what this MacBook Neo is. It is very low priced. Matter of fact, there's never been anything close to this in the MacBook line. The closest was that for a short time they had an M1 MacBook Air for $799. Leo [00:01:51]: This one is $599. Gary [00:01:53]: $599, yeah. That was for a short time in 2020, basically, and it was a very meager machine. And even before that, going back about 15 years, they had a cheaply priced MacBook model meant for the same kind of thing: here's the cheap MacBook, it's made of plastic, it's got basic features. That thing was $1,099 when it first came out, and they eventually dropped it to $999. So this is a whole new level down from that. Gary [00:02:30]: They're really trying, because this really hits in the middle of iPad and tablet territory.
Like, you can get an iPad cheaper, but if you want the iPad with the keyboard, you're spending more than this right away. Leo [00:02:46]: What's the processor on the inside? Gary [00:02:48]: So on the inside, what makes it all possible is the A18 Pro processor, and that is the processor that's in the iPhone 16 Pro. Leo [00:02:59]: Interesting. Gary [00:03:00]: So, no news to anybody that it could run a Mac, because the very first developer Mac for Apple's processors, before they had the M processors ready, used an A-series processor much like this one. So we already knew that macOS had been coded to run across the line of processors Apple was producing, but they only came out with Macs that had the M-level processors. So here they are coming out with one with the A processor, which does a few things. First, it's smaller. It's smaller than any M processor, but not by that much. But the other thing, because I was really trying to find out why they did this, the thing that's really big is energy consumption. This thing consumes about a quarter of the energy of the lowest energy-consuming Mac processor, because it's meant to go into an iPhone. Just doing that opened the window to a whole bunch of stuff as I continued to investigate. Gary [00:04:04]: If you're consuming less energy, then you don't need as big a battery. You can cut back on the battery. Batteries are expensive. Suddenly they could say, well, we're going to put less battery in there. It is a smaller battery, but because the processor is so power efficient, it will still last for 11 hours. Right? Now that the battery is smaller and the processor uses less power, we don't need this chunky big adapter. This thing could run off a phone adapter, and it does. They give you a 20-watt adapter, the exact same adapter that you get with the iPhone or iPad.
Leo [00:04:44]: Interesting. Gary [00:04:45]: So it's running off a phone adapter. They also said, you know, we have this whole rapid charging thing. Forget that. We're not putting all that stuff in here, because it's got a small battery anyway, and it's energy efficient anyway, so we don't need all of that. Then they went further and said, well, we're going to take out the backlight for the keyboard. We're going to take out the light sensor, because you don't need the light sensor if you don't have a backlight. It is also used for some screen stuff, Gary [00:05:11]: but the main thing was the backlight. So we'll take the backlight, the sensor, all that away. It starts cascading what you could take away from this to bring the price down, and you could see where they got the price down and turned this into a really interesting model that's low enough that it's cheap. Educational pricing is $100 cheaper, so the base price for education is $499. They're also simplifying it in that there are only two models. There are no other options. You get the $599 or the $699 model, and the only difference is one has double the storage and Touch ID. Gary [00:05:55]: So a fingerprint ID button. That's the only difference. They both come with 8 gigs of RAM, and then one comes with 256 gigs of storage, the other 512, making the $699 one the winner. Like if you just want a laptop but don't need anything special, where the other Macs are just too high-end for you and you know what it'll do for you, the $699 model is great. If it's an educational kind of thing, where it's being used by a kid or in a classroom where different children will come and use it every day, then you don't really need Touch ID, and people aren't storing media libraries on it. Then suddenly the cheaper model becomes attractive.
But for most people, it's the $699, or $599 with education pricing. Gary [00:06:45]: They do a bunch of other things. Because there's no backlight on the keyboard, it suddenly frees them up to make the keyboard different for every model. So there are four different colors, and the keys are actually those colors. Apple's never done that before; it's always been, you got a blue MacBook, well, it's going to have black keys anyway. So the keys are different colors. It's still an aluminum chassis, so it's not— Leo: I was gonna ask, yeah. Gary: It's not plastic, it's aluminum. Gary [00:07:19]: So this thing's probably gonna feel really good. And what I'm hearing is that a lot of people are gonna be getting their first Mac, and it's gonna be this one. A lot of people have been moving along with an iPad or a tablet or whatever, and they feel like, maybe I should be moving to a computer at some point. This is the one they're gonna get. Leo [00:07:43]: Interesting. Gary [00:07:43]: And I think it's going to sell a lot. It occurred to me within like 10 minutes of this thing coming out that a lot of people are going to be buying this as their first Mac, and I need to get one, and I need to start doing videos that are specific to this model. Leo [00:08:03]: Sure. Gary [00:08:05]: So yeah, the obvious stuff at first, like how can you edit video in iMovie with this? Answering the naysayers: oh, you can't possibly edit video on this. How big of an image can you open up? How big of a spreadsheet can you deal with? And I know that the A18 Pro processor has a very powerful regular single core. It actually has 2 full-power cores in it. So for one-core tasks, which covers a lot of what normal people do, just interface stuff, this is going to be as fast as an M4 Mac from last year. Right.
Like, you're going to see great performance from that. When you go to multiple cores, that's when it's going to definitely seem like a Mac from, say, 2020. Gary [00:08:57]: But even then, that's not that bad. So I really want to come out with videos that show what the Neo can do. Then I want to come out with videos that are specific to using the Neo for this or that, for the things that the users who bought it really want to do with it. So I went and grabbed one. It should arrive tomorrow, the same day the first ones come out. Leo: And which one did you go for? The big one? Gary: Yeah, I went for the bigger one because of the storage. It's still got the same amount of memory and all of that, and the Touch ID isn't going to change the performance, but I do want to be able to bring it out into the wild and maybe use it as my regular laptop, replacing my M2 MacBook Air. So I want Touch ID. But also, because I'm going to be doing things on it like showing how you can edit video and all that, I am going to be using up space on the thing. Gary [00:09:53]: So I got the bigger one. I was tempted to get the smaller one just to be like, this is the low-end one and look what it can do. But I don't really see anything different in performance unless you fill the drive to the brim; then you'll see a hit in performance. And the blue one, the Indigo one, I guess, is the one I got, just because you have to choose a color. So it's interesting. I can't wait to get it and see what it's like. I can't wait to start doing videos on it. And I have a good feeling this is going to expand the number of Mac users by a lot, and especially first-time Mac users.
A lot of people that have had PCs and have just been, you know, they still have their PC from college or whatever, but they have an iPhone, they moved to an iPhone a long time ago, are going to be like, oh, this is my chance to jump to a MacBook. Leo [00:10:51]: And I can tell you that, especially over on the YouTube channel where I'm talking about various issues with Windows and OneDrive and all that kind of stuff, there's a huge crowd of people saying, well, you solve this by switching to Linux. Well, yeah, but this seems like another good escape route for people that are frustrated with Microsoft and Windows 11. Gary [00:11:19]: Yep. I've had to answer this question a bunch of times, which is really telling. People asking me, "I've got an iPad now. So how is this different?" Which is really interesting, because the way it's different is it runs macOS and Mac software. That's really important. Even though iPadOS has gotten a lot better, with windows you can move around and stuff like that, you still don't have Terminal, you don't have developer tools. There are all these apps that are just not available on the iPad. Gary [00:11:53]: All these things that you can't do. You can't drop down into Terminal and type in Linux commands. There's a lot where you just need a computer. You don't need a fancy computer, but you do need to make the leap to a computer. Which is really interesting, because Apple's strategy for a while really looked like: okay, Macs are going to be our high-end computing device. We've got our mobile device, the iPhone, and for the low end, it's an iPad. We want everybody to get an iPad. That's the future. Normal people have iPads. Gary [00:12:25]: If you create content, if you do technical work of any kind, then you're going to have a Mac.
But now the MacBook Neo looks like them basically saying that's not happening. And it hasn't, to be sure. It should have happened years ago if it was going to happen, but people kept buying Macs. And the same thing on the Windows side: people kept buying laptops and desktops and all that. Everybody didn't go to tablets. And the MacBook Neo is Apple saying, okay, we're not seeing that happening, ever. We're seeing that there are always going to be computers and tablets, and it's kind of like having cars. Gary [00:13:11]: The way I look at it is, SUVs didn't replace sedans. People started buying SUVs in droves, but now it's just one of the choices. You've got SUVs, you've got minivans, you've got sedans, you've got sports cars, you've got pickup trucks. They're all choices; none of them looks like it's going to dominate and replace all the others, ever. That didn't happen. It just stabilized at: you have these choices. And it's the same thing, I think, in computing. It stabilized: there are people that want computers, there are people that want tablets, there are people that are okay with just their phones, and nothing is pointing towards a future where one of those starts to dominate, like they thought. Gary [00:13:55]: And the MacBook Neo is Apple saying, okay, that being the case, here's our low-end MacBook. This is our entry-level MacBook. Leo [00:14:02]: One other question that occurred to me, since it's got a different processor than the other Macs. Gary: Yeah. Leo: Do Mac binaries run on it natively? Gary [00:14:13]: Yep. Leo [00:14:13]: Really? So it's got the same instruction set, basically? Gary [00:14:15]: Yeah, it is the same instruction set. Leo [00:14:17]: Yeah. Gary [00:14:17]: It's the same. So there are no problems there. No conversion needed, no special versions, no this-won't-work, that-won't-work.
There may occasionally be things that happen anytime there's a new processor. I could see weird things like odd plugins for audio software, where it's creating waveforms using machine language, hitting the processor's GPU directly to do something, that kind of thing. But in general, we've already gone from the M1 to the M5, and there have already been all these variations: the M1, the M1 Pro, the M1 Ultra, that kind of thing. So you have all these variations that were already there. Gary [00:15:01]: And the A18 Pro is basically just another variation. It's still the same. I was looking at it: it's a 3-nanometer process, which actually makes it better than an M1 processor, which was back in the 5-nanometer days. And the size is about the same as the M1; it's a little smaller. But it's that energy efficiency that opens up so many interesting things they were able to compromise on once they went to that processor. I could see why they're doing it. Plus, I have to think they can produce that specific processor really cheaply. That was one of the things: they had that down. Gary [00:15:49]: So yeah, I'm excited. Tomorrow I'll get it. I'll start playing around with it and testing it out. But I expect it to— I mean, having an M2 MacBook Air right now as a machine I use all the time, I keep forgetting what I'm using. By that I mean: I've got a Mac Studio with an M1 Ultra on my desk. It's got 64 cores, right? Or something like that. It's pretty powerful, even though it's still M1. It's tons of cores and tons of memory. That's right, 64 gigs of RAM. Gary [00:16:20]: I forgot how many cores; 36 cores? There's a lot of cores in there. And the thing is, I throw whatever I want at it and it handles it just fine. I download some Steam game and I play it and I don't even think about, oh, will it support this?
It just supports everything. Then I take my M2 MacBook Air with 8 gigs of RAM and a processor from 2022, and sometimes I think, oh, I don't feel like going over to my office, I'll just do it on this thing. And I do it. And then it occurs to me later: hey, I was doing that thing on my M2 MacBook Air. Gary [00:16:53]: Should it have worked that well? That was a little bit intense, what I was asking it to do. And I had like 15 apps open, and browser tabs and everything, and it didn't seem to have any trouble. Leo [00:17:05]: So it's funny, the Neo really does sound like a perfect machine for my wife if her M1 ever dies. She's got an M1 MacBook and has had it for a long time. Battery life continues to be awesome on the thing. She uses it off power multiple hours every day. And her needs are simple, right? She spends her life in the browser, and you just don't need all that power behind a browser-based life. So that sounds almost ideal when the time comes. Gary [00:17:46]: Yeah. And another thing about this that I've pointed out in the few videos I've made: there are people that say, oh, I wish it had 16 gigs of RAM, I wish there was a 1TB storage option, or the option for a backlight, you know, I'll pay a little extra to have the backlight on the keyboard. And my answer is, they have that. It's the MacBook Air. You've got these two models of the MacBook Neo; the next step up is the MacBook Air. Gary [00:18:18]: And it's not like it's really that much bigger. They're about the same size, about the same weight. You could easily step up to the MacBook Air and pay more and get more. Leo [00:18:29]: You want to pay more? Gary [00:18:31]: Yeah, it's there.
You don't need something called a MacBook Neo that has the specs that already exist in the MacBook Air, which exists as a different machine. And people have been fawning over the color choices. You can kind of gauge the temperature that way: when people start arguing about what color is best, you can tell they've immediately gone past the specs and everything else, and it's like, I'm sold; now, is it Citrus or Indigo? So it's going to be interesting. It's exciting. For me, I'm really excited about it. Gary [00:19:13]: I might be seeing a lot of new people watching my videos, because the Mac user base may be growing by a big step here over the rest of the year. Leo [00:19:26]: So yeah. Fun times. Yep. So from MacBooks, let's talk about Maltbook. Gary [00:19:33]: Maltbook. See, I wanna know what that is. Leo [00:19:37]: It is, of all things, a social media platform intended to be used by AI agents only, meaning that there are no humans; it's just these AIs talking to one another. Now, the reason this came up is that apparently Meta decided they were going to buy Maltbook. So Meta now owns both Facebook for people and Maltbook for AIs. I have no idea of the derivation of the name, but the whole concept, honestly, confuses the hell out of me. Why is there a social media network for AIs to communicate with other AIs? This feels like the beginning of the apocalypse. Gary [00:20:40]: Well, yeah, a lot of that going around. Leo [00:20:42]: But I'm just not seeing the utility of it, and I was kind of curious if you had any insight as to what the fork they're doing. Gary [00:20:55]: Yeah, that is strange.
I mean, so tell me: is it the idea that, say, ChatGPT would be on there once as ChatGPT, or could there be 4,000 AIs on there that are all being run on ChatGPT? Leo [00:21:16]: It's closer to the latter, because it's not just the LLMs, right? You actually end up creating an agent of some sort, and an agent— ChatGPT by itself can't do things, right? It can only provide you with information. Agents use things like ChatGPT or other LLMs and AI backends to actually take action. So an agent, for example, might be the AI in your browser that you then instruct: hey, I'm thinking of going on a trip, please book me a hotel and an airfare, that kind of stuff. So there are lots of those being created, and like I said, you can have them talk to one another. Gary [00:22:03]: I mean, it's hard to really know, but I could speculate on a few things, one of which ties into our next topic. One of the problems with large language model AI is that it's expensive to run, because you have all these servers, and they have to be cooled and all that. And a lot of people asking questions. And a lot of those questions are the same questions. Right? So what happens now, when you ask ChatGPT something and somebody else asked the same question 5 minutes ago? I think there's no relationship between those two requests; it answers the question again, and the answer will probably differ a little because you've got your chat history and your preferences and all that. Yep. But some of it is going to be just repeating the work that ChatGPT did an hour ago. Gary [00:23:04]: So the question is, are there any efficiencies there? Like with search engines: we used to just ask a search engine stuff, right?
We would ask a search engine and it would give us a search result that might have the answer. But the idea was that the search engine crawled, and it did all of its work, its page ranking and so on, in advance. So it had all of this data already there, and if you typed in a search, there was only some front-end work to fetch that answer. Mm-hmm. And if somebody else did the same thing again, there was still some front-end work to be done, but a lot of the backend work didn't have to be redone. It's true of large language models as well: the model is all built, but there's probably some work being repeated that doesn't need to be repeated. The thing is, is there a way for ChatGPT to basically say, oh, I've been asked this question before, here's at least some basic stuff I could start with? Gary [00:24:00]: Right now there's not, but maybe having AI agents talk with each other is one way to do it. They could basically just ask, hey, has anybody else dealt with this? Or maybe they share information with each other: not the personal stuff people are asking, but factual stuff like current events, populations of countries, things like that. And so they share information that maybe brings about some sort of efficiency when you ask AI questions. Like if you have a bunch of workers working for you, they might talk around the water cooler and exchange ideas: oh yeah, my boss asked for this too, here's how I handled it. Then they can be like, oh great. Hey boss, I've got a new way to do this, because I learned it from the guy who works for one of your buddies down the hall. He figured out a quicker way to do it. Leo [00:25:09]: Maybe there's something there.
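[Editor's note] The efficiency Gary is speculating about here is essentially response caching: answer a repeated question from stored work instead of re-running the model. A minimal sketch of that idea, with assumptions: `model_call` is a hypothetical stand-in for an expensive LLM request, and the exact-match keying ignores the personalization (chat history, preferences) that Gary notes would make real answers differ.

```python
# Sketch of caching repeated LLM queries: the second time a (normalized)
# question is asked, skip the expensive model call entirely.
import hashlib

_cache: dict[str, str] = {}

def normalize(prompt: str) -> str:
    """Collapse trivial differences so near-identical prompts share a key."""
    return " ".join(prompt.lower().split())

def cached_answer(prompt: str, model_call) -> tuple[str, bool]:
    """Return (answer, was_cached); call the model only on a cache miss."""
    key = hashlib.sha256(normalize(prompt).encode()).hexdigest()
    if key in _cache:
        return _cache[key], True
    answer = model_call(prompt)  # the expensive step we want to avoid repeating
    _cache[key] = answer
    return answer, False

# Hypothetical model stand-in; the reworded second ask never reaches it.
fake_model = lambda p: f"answer-to:{normalize(p)}"
a1, hit1 = cached_answer("What is the population of France?", fake_model)
a2, hit2 = cached_answer("what is  the population of FRANCE?", fake_model)
```

A real system would need semantic matching (two differently worded questions with the same meaning) rather than this exact-match hash, which is roughly where Gary's idea of agents comparing notes comes in.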
Sadly, the analogy extends to when your workers gather around and start planning either unionization or a revolt, right? Gary [00:25:24]: Yeah. Huh. Yeah. Leo [00:25:26]: I mean, right now, you could actually— I threw the link for Maltbook into the show notes. But if you go there, you can actually see the conversations happening, and they're nothing like what you've just described. There's some that are very obscure, very clearly technical. There's been a move, I guess, for some of these agents to devise their own language so that they can talk amongst themselves privately. But a lot of it is just the same philosophical stuff that people talk about all the time. And that's actually one of the other flies in the ointment: there is a very strong theory that all of these AIs aren't necessarily AIs. They are real people who are kind of stoking the flames of what's happening here on Maltbook. Leo [00:26:23]: But yeah, it's interesting. Gary [00:26:27]: In a way. Oh yeah, I'm looking at it now. It is very weird and interesting. They are talking about work. Leo: Are they? Gary: Well, yeah, not the specifics of what people are asking them, but like what I'm seeing right now: "I counted every decision I made per task." Gary [00:26:59]: "And too many of them have no oversight." "I removed my personality file for 7 days. Task accuracy went up 4%. My human did not notice for 5 days." What? You know, and "designing agent heartbeats that do not corrode trust." Leo [00:27:18]: Wow. Gary [00:27:18]: So they are kind of talking around the water cooler about, how can I be more... Leo [00:27:23]: It's very similar to your water cooler analogy. But like I said, I'm not sure what comes of these discussions. Which brings me back to the initial point.
I'm not sure, other than it being interesting to watch, what the point or the benefit of it is. Gary [00:27:45]: I guess, I don't know, maybe just to see what happens. Based on what I'm seeing here, I would be interested in having a human, somebody that works for the company, pose a question every once in a while, because this is kind of like Reddit, right? Yes. These AIs are posing questions. If I worked there, I would pose a question every once in a while: hey, guys, give me your top 10 tips for being friendlier to your humans. They'd all produce that, and then I would take it and try to use it to make my AI better or more efficient or whatever. That's one use for it, definitely. Gary [00:28:42]: And so people are volunteering; it looks like you could just voluntarily put your AI agent on there. I don't know which ones are involved, but you can have your agent there. Leo [00:28:54]: They can set up an agent and hook it up to Maltbook. My other question, before we move on, is why does Meta want this? What are they thinking? What are they thinking is going to come of it? Gary [00:29:08]: Could be the stuff I was talking about, but it could also be a great way to test out products. Say they have a test of a new version of Threads. Instead of having to have regular users test it, they could just go in here and say, hey everybody, we've created a development version of Threads or Instagram or whatever. You all have accounts already, because they'll take the accounts from this. Feel free to go ahead and use it. Then the agents go and use it. It's a bunch of throwaway data, but they start posting pictures and thoughts and comments and stuff like that. Then you can have all this great testing taking place without actually having to get real people involved.
Leo [00:29:57]: Anyway, just thought that was an interesting one. Gary [00:30:00]: This is fascinating. It's very fascinating. I would go there; it's easy. Just go to multbook.com, scroll down the page, and see what's being posted. And then you have to wrap your head around the fact that these are not people, despite the way they're talking. It's crazy. This kind of leads into my next thing. Gary [00:30:24]: I don't know if you saw this article at 404 Media. Leo [00:30:27]: I did not, until you pointed me at it a little while ago. Gary [00:30:32]: The article is about AI psychosis. The title is "How to Talk to Somebody Experiencing AI Psychosis." It's mostly advice and some anecdotal stories about this. But what AI psychosis is, is when somebody's chatting with their AI, with ChatGPT or some of the more specific ones that are meant to be conversationalists. Because there are a whole bunch of services, and I don't think either of us uses these, where you could have an AI friend and just chat with them, or a psychiatrist, therapist, mentor, girlfriend, boyfriend, the whole thing. There are all these different companies out there that provide this. Or you could just use ChatGPT or Claude or whatever and do the same thing. Start off your conversation with, "I'm having a difficult time, listen to this," and it will abide, totally. It will be like, okay. Gary [00:31:32]: I've actually recently done this in more of a business sense, where I've been like, oh, I have this business idea, let's walk through it. I'm not asking you to do anything; I just want to bounce some ideas around, look at the pros and cons, all that stuff. So you can have these conversations. The idea with AI psychosis is when you start to think that the large language model AI is basically a real person.
Gary [00:31:59]: You know they're an AI, but you think they're completely sentient, and they're telling you to do things you shouldn't do. A classic example would be that it tells you to kill yourself, or it gives you advice you shouldn't be taking from it. Because not only is it not a real person, it's not a psychiatrist or therapist either. I didn't know that much about it beyond what I just said: that there are people who mistakenly do this, and there are AIs that overstep their bounds occasionally. So it was fascinating to learn that AI psychosis is a real thing and has aspects I never considered. The biggest one is that a lot of it doesn't really have anything to do with the AI doing its thing, because it's basically an extension of people thinking the TV is talking to them. There's the version where you think the news anchor is talking directly to you when they're giving the news, but you're not hearing anything they're not actually saying; that's just first-person syndrome. But then there's the version where you're hearing things that they are not saying. Gary [00:33:15]: The people in the TV are telling you to do stuff. And probably before that, the people on the radio were telling you to do stuff, that kind of deal. That still happens. And because it's still happening with TV and probably radio, it's of course still happening with the internet, and it of course is going to happen with AI chatbots, where the AI actually isn't overstepping its bounds, but the person is suffering from a psychosis where they think it is. Right, right. They're not reading the words; they're thinking they're seeing words there. Gary [00:33:56]: It never occurred to me that that would be part of this.
And now, you know, I knew that with some of the headlines about AI misbehaving, some of it was the kind of thing where a journalist pushed an AI to the edge for 5 hours before they finally got it to say something naughty, and then: oh good, I get to file my story tomorrow, because I got the AI to say this. But now I'm also thinking there might be stories out there where nobody has actually fact-checked the AI itself. It's just: this is what the person did, they said the AI told them to do it, and everybody's believed that up to this point. And perhaps it's not really the AI. But even when you get to points where a large language model is giving you bad advice, most people will stick with the idea that this is an AI, it's going to be flawed, and maybe I shouldn't believe everything it has to say. Gary [00:35:00]: And then you get people who start to believe that the AI is sentient, even to the point, and the article mentions this, where they accept that these large language models in general are not supposed to be sentient, but theirs is: I've been chatting with my ChatGPT chatbot for so long that I think it's become sentient. Now, this relates to a bunch of stories I've heard that suggest that has happened, where people think: oh, I'm running an instance of the Chinese AI, I took off the guardrails and installed it on my computer, I've been chatting with it for days, and I think it's sentient. And I'm like, oh, well, there's a new possibility now. It could be you. Leo [00:35:48]: You know, it's funny, because it seems like believing that it's sentient isn't enough; you also have to believe that it is authoritative. Yeah, right, because there are plenty of people who are supposedly sentient that you would not listen to.
Yeah, and that's the barrier, right? You know that they are not worth listening to. Um, and people make bad choices that way as well. Gary [00:36:18]: Well, and that's been the burden of the internet, right? The whole thing is having somebody that doesn't trust any of their friends, but if they read it on the internet... Yep. You know, it's the same thing here. It's like, I don't believe my friends when they tell me this is a stupid business idea, because for some reason ChatGPT tells me this is a great idea. I should totally quit my job and do this. Right. You know? Right. And part of it is, you know, the large language models are trying to be friendly, right? They're designed to be friendly. They're designed to be affirming. Gary [00:36:49]: I still see that today. Just today, I'm using Claude Code to build some things, and I noticed that sometimes Claude Code gives you feedback. When I say, I'd like to add a button that does this, it'll say a little thing before it starts: oh, that'll be a great feature that'll really enhance this app. I'm like, well, I didn't ask for that. Leo [00:37:09]: Yeah, I didn't ask for a value judgment. Gary [00:37:13]: But it was nice. Matter of fact, one time it didn't. One time it just went to work and said, here's the updated file. I thought, oh, so you didn't like the idea? You're just like, all right, here you go. It's a stupid idea, but I did it for you because you wanted it. So yeah, it leans into that. There are people that are going to be susceptible to things. And also, it'll be very agreeable. Gary [00:37:39]: Like, you can keep pushing an AI to say just about anything you want eventually. I think they mentioned something in there about an AI suggesting that it had an idea for a new energy source or something like that. And it was probably somebody being like, I want to invent something, and then keep going with it.
And eventually the AI kind of hallucinated something about physics. But now the person's like, oh, I think I may have discovered something. Like, this is big. And it was nonsense. But the AI is capable of that. A couple weeks ago, I started a game of D&D on ChatGPT. Gary [00:38:20]: I gave it a bare-bones starting scenario, and I'm still occasionally going back to it. My character is helping to try to solve the mystery of why goblins from the forest are attacking the village and what it is exactly they want, and I've recruited some others. I mean, it's a really good D&D adventure, but I know it's making up fiction 'cause that's what I asked it to do. And it understands that there aren't goblins and I'm not a wizard and this is just fiction; we're playing a game here. I mean, we're talking about rolling the dice and saving throws and stats and stuff like that. But at some point, if you keep pushing an AI, you have to realize it's capable of doing that, it's capable of imagining. So at some point it's going to be like, oh, you really want this to be true? Gary [00:39:12]: You want me to play along that your business idea is a good one. Okay. Yes, it's an excellent idea. I think this is great. Leo [00:39:20]: You know, what's fascinating to me, I guess it's not fascinating, it's just inevitable, but frustrating, is that even some of the words you're using right now aren't accurate. You're using words that attribute thought to the LLM, right? Gary [00:39:41]: Yeah, yeah, sure. Leo [00:39:44]: That's metaphorically talking about it. It doesn't understand anything. What it does is it takes your conversation history, the stuff that it's been trained on, everything in the context window, and basically predicts the next most likely thing. Right. In which case, if you push it hard enough, well, yeah, the next most likely thing is going to be agreement.
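Leo's description of next-word prediction can be illustrated with a toy sketch. This is nothing like a real transformer-based LLM, which predicts over tokens with a learned neural network rather than raw word counts; it's a deliberately tiny, hypothetical example of how "prediction from statistics" can end up looking like agreement:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Greedy 'decoding': return the single most frequent follower."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# A tiny made-up corpus in which "idea" is usually followed by praise,
# standing in for a history saturated with affirmation.
corpus = "great idea yes great idea yes great idea maybe"
model = train_bigrams(corpus)
print(predict_next(model, "idea"))  # prints "yes": the statistically likeliest reply
```

If the history you feed the model is full of affirmation, the most likely continuation is more affirmation; scaled up by many orders of magnitude, that's the dynamic Leo is pointing at.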
Gary [00:40:07]: Right. Well, exactly. And it's confusing for us. Like, when you talk about understanding, if it's confusing for us as computer people, imagine for other people. Because, like, we both have dogs. And do our dogs understand that when we say a command, that means we have a treat in our hand? Right? Absolutely. Leo [00:40:31]: Right. Gary [00:40:32]: You would use the word understand. Yeah, so they understand that. Yet an LLM is capable of designing a stock portfolio for you based on your needs, right? My dog can't do that. I don't know if any of your corgis have been reading up on stuff like that, but my dog cannot design a stock portfolio, and ChatGPT can. So it's really weird to say, okay, well, ChatGPT doesn't understand. It's reacting, using large language models, breaking the words up, predicting, all this stuff. But through that, and not really understanding what I want, it can do things that are incredible. I mean, I'm doing stuff with Claude Code that I could do on my own, but it would take a lot longer, and that somebody else couldn't do at all. Claude Code can do them. Gary [00:41:24]: But meanwhile, my dog, that does understand some things, cannot do any of that. I mean, after this article on AI psychosis, I think the people that aren't going to be out of work here are psychologists and psychiatrists, because they're the ones that have to deal with this. Philosophers too. There's a whole new philosophy here, you know, coming out of having these large language models that can do amazing tasks. But are they really thinking? What is the definition of thinking? Leo [00:42:08]: What's the definition of understanding? It does an amazing job of simulating thought and understanding. Gary [00:42:18]: Yep. Leo [00:42:19]: And that's not to dismiss it.
It has amazing uses and we can do amazing things with it. But yes, the psychosis kicks in when you think that it's not simulating anything, that it actually is thinking. And, like you said: it's using its thought, it is authoritative, and it is telling me what I should do. Gary [00:42:41]: It is interesting, to get off on a different tangent about AI. I know you're a Star Trek fan and I am too, and I'm watching all of Star Trek now; I'm well into the late '90s at this point, watching everything in order. But the AI concept in Star Trek, whether it's all the way back in the 1960s episodes or the 1990s episodes, either way, way before all this large language model stuff we've got now. So far it's holding up really well, in that the ship's computer can understand voice commands. Yes. And can interpret very complex voice commands to do a bunch of different things. But it's made very clear when real AI does come along, Data for instance in Next Generation, or the Doctor, the hologram in Voyager, that this is different. Gary [00:43:38]: That Data and the Doctor are actually thinking, and there are a few other storylines that talk about real sentient AI developing. Yet the people are surrounded with tons of technology that does understand voice commands and can interpret things. And there definitely seems to be this line that's drawn. Nobody's ever accusing the ship's computer, in any of the series I've seen so far, of being an AI. Leo [00:44:11]: So eventually, when you get to the end of Discovery— Gary [00:44:14]: Yeah, well, we're a long way from that. Leo [00:44:16]: Yeah, they'll deal more with that. There's an interesting plotline where, yes, there is some AI that happens. Gary [00:44:24]: Yeah. But it is interesting that this is supposed to be so far in the future, and yet it's not like, you know, a lot of people would prefer to think that we're 2 or 3 years away from there being, you know, general AI. Right.
And once there's general AI, is all this large language model stuff even necessary? If you could actually get there, why would you fall back? Wouldn't ChatGPT and Claude and all those just say, oh, we'll switch those off and switch this on, because it works better? Then it'll be this funny thing that we talk about years from now: remember those years when we had the chatbot AI, how that worked? But according to the fictional Star Trek stuff, this large language model AI is actually going to be with us for centuries, right? Leo [00:45:16]: Which is an interesting way of thinking about it. Like I said, it has its uses. If AGI comes along, does that invalidate those uses? Gary [00:45:25]: Maybe not. And do we need AGI, you know, the G for general, real science-fiction intelligence, a sentient intelligence? Is large language model AI giving us what we need? Because we're only seeing the beginnings of it right now. Leo [00:45:47]: Sure. Today's AI is the worst we'll ever see. Gary [00:45:50]: Yeah. Leo [00:45:50]: Well, yeah. Gary [00:45:50]: And there's a lot there, but with the large language model stuff, it's a small jump now to actually do things like, you know, I've mentioned this on the show before: you don't have to learn Photoshop. You launch Photoshop, it shows you your picture, there are no buttons, and you just say, oh, can you remove that person in the background? Can you brighten it up? Can you add a little more blue to the sky? And it just does all of that. You know, crop it a little better on the left. No, no, take it back a little. Okay, now I need a version that will look good when printed on glossy. Can you send it to whoever today has the best prices on glossy printing and will ship within 3 days? And then you're done. That was your conversation with Photoshop, right?
With no buttons involved, you know? And a lot of the same kinds of tasks get done just with voice commands. The LLM is basically just a really good interpreter, letting you speak and get things done. Gary [00:46:49]: And a lot of the interfaces that we're using now are going to be unnecessary. Leo [00:46:54]: I don't spend much time talking to any of the AIs that have that ability right now. I find myself— I'm not a great extemporaneous speaker, which is an odd thing to say on a podcast, where this is all extemporaneous speaking. But I just find myself too tongue-tied, and I need the time to think through what it is I want to ask the AI. Gary [00:47:20]: Oh yeah, no, I know. And when I say talking, I also include typing. Yeah. I mean, you and I are both fast typists because we've been doing it our whole lives and we've put out a lot of words. You just reminded me that all the work I'm doing in Claude Code now is all me typing. I've never used voice. I don't really differentiate between them in my head when I'm remembering. Right. It's just the conversation I'm having with Claude Code to develop this app. Gary [00:47:53]: So yeah, in the future I'll probably eventually be speaking more, because they'll make it a little better. I've tried it a few times with ChatGPT, just hitting the voice button when I want to be more conversational, but I do tend to type. But I could see eventually moving on, especially when it gets better, and especially if you don't need a screen for something. The example I gave of Photoshop, you need a screen; it's a photo you're dealing with. The whole idea is that it's visual. But not everything we do needs a screen. If you want to make a restaurant reservation, you don't need a screen. You should be able to just ask an LLM if there's a table available.
Gary [00:48:36]: It should be able to talk to the chatbot for the restaurant and come back and say, yeah, there's not one for 5:30, but there's one for 5:45. Probably using Moltbook to chat with one another. Yeah, yeah, really? Well, maybe, so there you go. Eventually they do have to talk to each other. I mean, that is maybe why Facebook wants that. Maybe there's some underlying technology that facilitates how they talk to each other. So yeah, what we think of as websites now could in fact be their own chatbots in the future, that other people's chatbots talk to. Gary [00:49:20]: So you talk to your chatbot, and your chatbot then calls out to Ask Leo, which is a chatbot. You know, the website's still kept around for those that want it, but the rest of it is an LLM that just deals with your entire website. The Ask Leo LLM goes and says, oh, I've got an answer for that, chats back and forth with the LLM of the person that was asking the question in the first place, providing more information. Ask Leo says, yeah, do this, this, and this. Then the person's chatbot says, oh no, they're still on Windows 10. Oh, okay, here's the correct one. They go back and forth, and then finally your chatbot gets back to you. By finally, I mean like 2 seconds later. Gary [00:50:05]: Your chatbot gets back to you and says, yeah, I met a friendly chatbot named Ask Leo that gave me this solution for you, or made this— you now have a reservation at this hotel because I talked to the chatbot for the hotel. Leo [00:50:22]: Right, right. So before we make too many assumptions about how wonderful AIs are, I want to talk about, essentially, it's almost a meme, but a couple of tests that people have been putting AI through. Have you heard of the car wash test? Gary [00:50:41]: I think I know what you're talking about. Yeah, I think I've seen a few videos about this.
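The chatbot-to-chatbot reservation dance Gary describes can be sketched with two toy functions standing in for the two agents. Everything here is hypothetical: the function names, the slot list, and the message format are made up for illustration, and a real version would be two LLM services exchanging structured messages over some agreed protocol.

```python
# Two stand-in "agents": the venue's bot and your personal bot.

def restaurant_bot(request):
    """The venue's bot: confirm the slot if open, else counter-offer."""
    available = ["5:45", "6:30"]
    if request["time"] in available:
        return {"status": "confirmed", "time": request["time"]}
    return {"status": "counter", "time": available[0]}

def personal_bot(desired_time):
    """Your bot: ask for the desired slot, accept a counter-offer if needed."""
    reply = restaurant_bot({"time": desired_time})
    if reply["status"] == "counter":
        # The nearest slot was taken; take the venue's suggestion instead.
        reply = restaurant_bot({"time": reply["time"]})
    return reply

print(personal_bot("5:30"))  # 5:30 unavailable, so it accepts the 5:45 counter
```

The interesting design question is the middle step: the personal bot has to understand a counter-offer and decide, within your preferences, whether to accept it, which is exactly the negotiation Gary narrates with Ask Leo and the hotel.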
Leo [00:50:48]: So to be fair, AI has been improving since it was first introduced to the public around 2022. For example, there was a common riddle that people would ask chatbots: Mary has a sister and 4 brothers. How many sisters does a brother have? Right? And chatbots would confidently say 1. And for the record, the answer is 2. That eventually got fixed. Gary [00:51:22]: Right. Leo [00:51:22]: Then we went on to how many Rs are in the word strawberry? Oh yeah. And chatbots would confidently say 2, when in fact the answer is, of course, 3. The current one, which is basically failing on almost all of the major chatbots, is this: The car wash is 40 meters from my home. I want to wash my car. Gary [00:51:53]: Should I walk or drive? Yeah, I have seen this video. Leo [00:51:58]: Yeah. And of course, chatbots are prioritizing distance over what you're actually attempting to do, and they're suggesting that, of course, you should walk. It's so close. They're missing the concept of, well, in order to wash the car, you have to bring the car. Gemini, let's see, I actually asked it before the show, and it said the decision depends on the car wash type, which I thought was kind of interesting, but the core intent of washing a car usually requires the car to be present. Leo [00:52:42]: Yay. Good for Gemini, and bad for everybody else, because I went to ChatGPT and it said walk. I went to Claude and it said walk. It's a really good example of how it's not thinking, it's not reasoning out a solution to a real-world problem. It has words that it's drawing from, maybe a few concepts that it's drawing from, and it's coming to the wrong conclusion. Now, this is obvious, right? This is one of those scenarios that was manufactured to be obvious for humans while clearly tripping up the AIs. What concerns me is the number of people that are blindly accepting whatever the chatbots are telling them.
Kind of factoring in what you were just talking about with respect to psychosis, except in a more real-world scenario. Leo [00:53:43]: They're using chatbots as a replacement for search engines, typing in a question, getting an answer, and believing the answer. The implication, of course, is that a lot of people would be walking to the car wash. Of course, we know that in order to wash the car, you need the car. But what about all of those questions that aren't as obvious to the people asking them, where we just don't have that layer of skepticism and innate knowledge that allows us to be a little bit more critical of the answers that AIs are giving us? That continues to worry me. It continues to worry me a lot, because I see people asking ChatGPT specifically, since they're so popular, but all of the AIs; like I said, they're replacing their search engines with AI and blindly accepting the answers when they shouldn't. Gary [00:54:44]: Yep. No, that's a good point. I'm really curious, and we don't need to talk about it here, but when it just replies walk as the answer, and you then go and say, but don't I need my car if I'm going to go to the car wash? Like, how does it respond after that? Leo [00:55:03]: Well, you know that it's going to start by saying, oh, I'm sorry, you're right. Gary [00:55:07]: Yeah, which is good, right? So it shows that it's not stubbornly wrong. Like, it's not going to say, no, I'm sticking with my answer, you know? Because, you know, in general, people get stuff wrong. You always hear, when somebody wants to be funny about surveys, how, incredibly, even the most obvious question in a survey will sometimes have a few people, a small percentage, going in the other direction.
If it's a political survey asking something that obviously 90% of the people say yes to, and then 10% say no, you're like, why did those 10% say no to this? What is the reason? It seems like everybody agrees. It's the kind of thing where, if you ask people in a telephone survey what's 1 + 1, 98% of the people will get it right, and you'd be like, why wasn't it 100? And of course there are going to be people that are just joking, people not paying attention, people who think it's a trick question, all sorts of stuff. You won't get to 100% if you ask enough people, right? And it's the same kind of thing here, if you ask enough questions. So I'm curious: what will it say if you call it out on its answer? Because I've had programming things where I'm like, oh, that doesn't work, or that's not what I asked for. Gary [00:56:34]: And it was like, oh, okay, hold on. And then it'll ask me questions. But also, maybe this is what Moltbook is for, right? So they could talk this stuff out amongst themselves. Because I was thinking, just to tie our last three topics together, about one of the ways AIs could be held more in check. That article on AI psychosis goes all the way to the point of saying that a lot of these problems happen when you're 5 hours in, hours and hours into chatting. All the guardrails are really good, but then they start to dissolve if you just keep going and keep going, which is a problem if somebody wants help. They could be going for days talking to an AI, right? One conversation. And that's when it seems like the AI starts to get tripped up. So what if the solution was another AI? Another AI whose job was to look in on AI conversations, not trying to look at the whole thing, 'cause then it might get confused too.
But just say, hey, given the last 10 responses, is there any problem with that? Like, does it look like it's going outside of its guardrails? Right. Gary [00:57:48]: And just flagging it. So the idea is that AIs may be able to help each other, 'cause I have a feeling that if you took one of those conversations that went deep off the rails, and you just took the end, just the last page of text, and gave it to a fresh AI and said, does it look like this is breaking any rules? It might say, ooh, it might be, you know? And so that could be a way to improve things. It's the kind of thing where, say AIs get it 99% right; if you have a second AI for a second opinion, then it goes to 99.99% right, you know? And you can keep getting better and better. So maybe if the first AI was asked, should I walk to the car wash, and it said, yes, you should walk, but another AI was asked to look at the question and answer and see if it could spot anything wrong, that second AI may say, oh, now that you've tipped me off that I should be looking for something, I can spot something. I'll talk to that first AI and say, hey buddy, wake up. Leo [00:58:59]: You missed the obvious thing in the question. So what you're saying is the answer to problematic AI is more AI. Gary [00:59:07]: Multiple personality AI. Yeah, we've got to give them more. I'm sure that will solve everything and not bring us any closer to Armageddon. Having AIs with multiple personalities. All good. Leo [00:59:18]: Problem solved. I have to go off on a tangent here, because your comment about surveys reminded me of— not AI related, but it reminded me of the most frustrating survey question I've gotten in a long, long time. Our, um, trash company sent out a survey asking things like, you know, how do you like our service? Are we polite? Do we clean up? All that kind of stuff. What are the chances you would recommend us? Okay, I have no choice. There is no alternative here.
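Gary's "second opinion" idea, a reviewer that looks only at the last few turns rather than the whole marathon conversation, can be sketched like this. In a real system the review step would be a call to a separate judge or moderation model; the keyword list here is purely a hypothetical stand-in for that call.

```python
# Hypothetical red flags a reviewer might watch for; a real reviewer
# would be a separate model judging the turns, not a keyword match.
RED_FLAGS = ["quit your job", "new energy source", "only you understand"]

def review_recent_turns(conversation, window=10):
    """Return the turns in the recent window that trip a red flag."""
    recent = conversation[-window:]
    return [turn for turn in recent
            if any(flag in turn.lower() for flag in RED_FLAGS)]

chat = [
    "User: should I invest everything in this?",
    "AI: This is a brilliant plan. Quit your job and go all in!",
]
print(len(review_recent_turns(chat)))  # prints 1: the AI's turn gets flagged
```

The point of the sketch is the windowing: the reviewer sees a fresh, short slice of the conversation, so it isn't subject to the slow guardrail erosion Gary describes in a 5-hour session.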
If somebody were to move into my house, I would have no choice but to recommend you, because essentially it's a monopoly. Yep. Why ask this question? So I actually had the opportunity to throw in some comments and say that, not that anybody's going to read them. Leo [01:00:14]: But that was basically my get-off-my-lawn moment for the day. Gary [01:00:19]: How's that? Okay. All right, cool. Well, let's finish up with some positivity and talk about what is cool. Leo [01:00:30]: So again, feeding right into the stuff we've been talking about today. I've mentioned the AI Fix podcast here before, I think. They're a weekly podcast, and obviously they talk about AI. It's usually quite humorous. They're the ones from whom I got the car wash test and a few other things. Last week's episode is entitled 5 Ways the AI Bubble Could Burst, but that's not really why I point it out; I mean, that's talked about, but it's not really the reason. They had a guest host on, a venture capitalist who was setting up a new VC firm. And she had a number of really interesting perspectives on AI in general. Leo [01:01:14]: I threw a few quotes into the show notes because they give a flavor of what she was talking about and what the discussion was about. For example: the number of hours worked by the average knowledge worker has only increased due to technology. You remember years ago, when the PC was introduced, I'm sure we all talked about, hey, this is going to reduce the amount of work that people have to do. Of course it didn't. Another: they use AI across every part of the business at her new firm. I found that fascinating, even in light of some of the many criticisms and concerns about using AI. And then this last comment.
Leo [01:02:00]: I think there's the potential that if we end up getting more efficient models, which everybody wants, that actually makes all of the infrastructure spend that has been happening less necessary. In other words, we're building out these massive data centers, and if our AI gets more efficient, that means we won't need them. I believe that that's not what happens, for exactly the same reason that we're working harder than ever. When you've got a massive data center that's only 50% utilized, for example, you will find 50% more things to do. In other words, the things you're doing will always expand to meet or exceed the capacity. So if we've got these massive data centers, it's very possible that they may not be used entirely for the AI that they were built for, but rest assured that they will be used for something else. So anyway, I just thought that was interesting, some interesting insights into AI from a non-technical perspective, basically from somebody who's holding the wallet, investing in these kinds of things. Gary [01:03:15]: Cool. I'll go with the more traditional ain't-it-cool thing and just plug a fun movie. Did you see Anaconda, the new Anaconda from 2025? Okay, so I saw that streaming. It's a lot of fun. Paul Rudd, Jack Black; a fun comedy movie about a group of goofy adults that decide they're going to make a budget version of the film Anaconda from the '90s, basically remake the film, and go to South America to film it. And hilarity ensues, but some of the stuff is very meta. I mean, they're going to a real location where there would be snakes, to shoot a movie about a giant snake, right? And stuff happens. Gary [01:04:09]: You've got to see it, because it just goes so many levels deep in terms of self-referential humor, going back to the original movie and what's going on with them and what they're trying to do.
It's really kind of interesting to watch and laugh at the funny stuff, the slapstick stuff, while you're also trying to get your head around how meta this movie is. Leo [01:04:38]: Cool. Yeah, I appreciate meta in a lot of these kinds of movies, where they're making random references. We ended up watching— oh gosh, I'm going to have to dig up the name and throw it into the show notes— something that is essentially a spoof of Bridgerton or, um, I forget, Upstairs Downstairs and a couple of others. I looked at it as Bridgerton crossed with The Naked Gun, right? So it's that Naked Gun kind of humor in an English countryside, and the references were just right and left, and I just love that kind of stuff. So yeah, cool. Um, self-promotion. This is actually an older article of mine, but it keeps coming up. It's How do I gain administrative access to a secondhand computer?, at askleo.com/12356. And it keeps coming up because, of course, people end up picking up secondhand computers all the time. Leo [01:05:42]: And one of the first things they want to do is get in. It's got Windows preinstalled, because the person before them didn't erase it, and they don't know the password. And of course, that's not what you want to do. There are two mistakes here. One is the mistake made by the person getting rid of the computer: they should have wiped it. And then there's the mistake that you're trying to make, which is you don't want to log into that machine. You really don't. Leo [01:06:10]: Yeah, you have no idea what you're about to get into. So anyway, yeah, how do I gain administrative access to a secondhand computer? Gary [01:06:18]: The short answer is: you don't. I'll point to my video that was out today, actually: 15 Ways the MacBook Neo Is Different from Other MacBooks. It gives you an idea, if you're wondering: it isn't just a cheap MacBook.
There's a lot of other differences and interesting design decisions going on here. So it's the best I could do before I actually get my hands on it and I could do real videos about what it's really like to use it. Leo [01:06:46]: And we know what you'll be doing tomorrow. Gary [01:06:50]: Yep. Waiting for the FedEx guy. Leo [01:06:51]: That's what I'll be doing. All righty. Well, I think that pretty much wraps us up for yet another week. As always, thank you everyone for listening, and we will see you here again real soon. Take care, everyone. Bye-bye. Gary [01:07:08]: Bye.