[00:00:00]

Welcome all you cool cats to The Neuron. I'm Pete Wong. Today, the launch of the Rabbit R1. After a disastrous launch by their competitor Humane, how is Rabbit faring in the public eye? Next, ChatGPT now has memory. How does it work? And does this direction actually make sense for ChatGPT? Finally, Perplexity and Devin are now worth billions of dollars. Is this some get-rich-quick scheme by AI engineers, or what's actually happening? It's Thursday, April 25th. Let's dive in. Our first story is about Rabbit, one of the companies looking to make AI devices a thing. They're building the R1, an AI in a box that's a touch wider than a smartphone and probably weighs about two-thirds as much. The AI device category has been white hot with activity. It's a full-on sprint to get something out on the market as fast as possible while still making it useful enough. Some names for you: there's Meta's Ray-Ban glasses, there's Limitless by Rewind, the Rabbit R1, and of course, the Humane AI Pin. Now, Humane learned the balance between speed to market and usefulness the hard way. Two weeks ago, they released their product and got absolutely destroyed by the tech press, partially because Humane hyped their own device to no end.

[00:01:22]

They said they'd replace the smartphone, and they wanted to charge you $700 and an extra $24-per-month subscription to do it. But the product didn't work. It overheated. It wouldn't respond to things. It would outright refuse requests and mishear you. And nearly every review we saw said it was the worst thing that they had ever seen. Of course, Rabbit was just watching all of this. They were slated to launch only a couple of weeks after Humane did. Now, here's a tweet from Rabbit CEO Jesse Lyu from around the time when Humane was blowing up. He said, In about seven days, our R1 reviews will be out. We are ready to face any criticism, and we will fix any issues that we need to fix. Rabbit will keep growing fast, and we are ready for this. You don't just beg for the future, you have to build it. And building is quite different than talking. So that brings us to Tuesday, when Rabbit hosted a party for press and for their early customers picking up their devices. And it's clear that they were very careful to avoid the mistakes that Humane made. Jesse Lyu made sure not to overpromise.

[00:02:26]

They kept on saying things like, We're going to work on it. We're planning on building this thing. It's on the roadmap. But here's what the R1 is actually launching with today. You can point the camera at something and ask the R1 what it is. You can ask it some basic questions. It also has early integrations with Spotify, Uber, DoorDash, and Midjourney. You can use voice commands to play music or order a car or even generate images. Now, what it can't do, well, pretty much everything else, but that's fine, I guess. They didn't promise these things out of the gate, just that they're going to have them at some point. The roadmap on the Rabbit website actually reads pretty well. Upcoming on the R1 are navigation features, reservations and ticketing, and research to help you understand a certain location or point of interest. All things that we would love to have in a V1 but that aren't quite there yet at launch. This all sums up to the R1's overall vision, which is to be a personal assistant. At $200 today, the Rabbit R1 is a fun toy to play with and a way for you to follow the journey of an AI startup trying to build the next thing.

[00:03:31]

And in fact, Rabbit has already convinced 100,000 people to buy these early devices to be on that journey. One reason to be excited about Rabbit is that they're building AI to help us do things. Now, ChatGPT and similar chatbots are built on something called language models. They generate language. Rabbit is building something called an action model, an AI designed to do things. So eventually, you can imagine the services that the Rabbit R1 plugs into expanding way beyond Spotify, Uber, and DoorDash. It can make a spa reservation for you. It can check when the gym closes. It can send flowers to your partner, all using voice commands on this little device. But before we get too hyped up about Rabbit, let's talk about the elephant in the room. I mean, wasn't this what Siri and Google Assistant and Alexa were supposed to be? Absolutely. Here's commenter DAGmx on Hacker News: I'm very unclear why these, meaning Rabbit and Humane, aren't just apps. I just don't see people carrying these in addition to their phones and dealing with the split interaction ecosystem. Now, this commenter has a point. You know where you can play music and order Ubers and order DoorDash?

[00:04:41]

On your phone. You know where you can ask an AI questions and have it do things for you? Also on your phone, once Apple upgrades Siri with the latest AI. And by the way, we know all of that is coming. Apple, Google, and Amazon are all working on AI models that make these kinds of upgrades to Siri, Google Assistant, and Alexa possible. Now, Rabbit made a smart move by not saying they're out to kill the smartphone. They would have gotten destroyed by the media, much like Humane got destroyed. But that doesn't mean they won't ultimately be compared to the smartphone. You have to ask if consumers are really going to carry around a separate device when their phone does similar things. Your big takeaway on the Rabbit R1: AI devices are an experiment. They're flashy and they generate a lot of attention, but they're still experiments. This new wave of AI has made it possible to reimagine how we interact with technology at every level. ChatGPT made waves by showing us that it's possible to talk to our software like we talk to our friends. Companies like Rabbit, Humane, Tab, and all the others have this idea in their heads that we can interact with our devices in new ways as well.

[00:05:48]

If I had to guess, that's probably right. Their attempts to build new companies and new products around this idea are completely worth it. The rest of the question is about who actually wins. If this idea turns out to be right, it might not be them, even if they had the right vision of the future. It might be that completely new devices aren't the right way to manifest that vision, and that simply upgrading our iPhones is the best way to make it happen. We can't predict the future on that. It's all one big test to see if these new startups can actually build a very good product and if consumers are keen to pick it up in their daily lives. The only way we'll know is if they try. Our second story is OpenAI officially giving ChatGPT the ability to remember things. Every week, over 100 million people log into ChatGPT, creating God knows how many conversations. You know what they have to do every single time? They have to tell ChatGPT who they are and what they want. Here's why. Think about ChatGPT as a robot intern that lives in your storage closet. Every time you need to talk to it, you turn it on and you take it out of storage.

[00:06:58]

Once it boots up, you can tell it what you want it to do and why, and it'll do it. Then once you're done, you power it down. But once it shuts down, it immediately resets to factory settings so that when you turn it on again, it doesn't actually remember a single thing about what you had said last time. You have to start over and tell it what you wanted to do and why again. With this memory update, ChatGPT now carries around this little notebook. As you have conversations with it, it's jotting down little tidbits in its notebook. Or if you say, Hey, you should probably write that down, it'll be like, Oh, okay, yeah, let me go ahead and do that. So that when you restart it again, it's still reset to factory settings, but now it has this little notebook. When it wakes up, it reads all the pages in the notebook to get caught up again. This is ChatGPT with memory in a nutshell. When you say something like, I teach 10th grade math, ChatGPT will now remember that and factor that into how it responds to you. It'll do it across every chat that you create.
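To make the notebook analogy a little more concrete, here's a minimal sketch in Python of how a memory layer like this could work. The class and method names are hypothetical illustrations for this episode, not OpenAI's actual implementation.

```python
# A minimal sketch of the "notebook" idea. Hypothetical names, not OpenAI's actual code.
class MemoryNotebook:
    def __init__(self):
        self.notes = []  # persisted between sessions, unlike the chat itself

    def jot_down(self, snippet: str):
        """Save a tidbit the user mentioned (or explicitly asked it to remember)."""
        self.notes.append(snippet)

    def wake_up(self) -> str:
        """On a fresh session, read every page of the notebook back into context."""
        return "\n".join(f"- {note}" for note in self.notes)


notebook = MemoryNotebook()
notebook.jot_down("Teaches 10th grade math")

# Each new chat still starts from "factory settings," plus whatever the notebook says.
system_context = "Known facts about the user:\n" + notebook.wake_up()
print(system_context)
```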

[00:07:56]

Here's an example. I'm going to Mexico City sometime in the next couple of months, and I need some recommendations on where to stay. When I start with a prompt like, Help me plan a trip to Mexico City. I need to be in this area. Can you give me some hotel and restaurant recommendations? ChatGPT's response will first say, Memory updated, and then proceed to answer. When I open the memory section in my settings, I'll see it added this phrase: Is planning a trip to Mexico City for a wedding in this certain area. When I say, I really like seafood, can you update the restaurant recommendations? It adds this phrase: Enjoys seafood. Now, we've been testing memory since they first put it in beta. In general, it's interesting, but it does introduce this new problem of managing the memory itself. Some of the information that's useful to put into ChatGPT's memory doesn't change very often. It's your background, where you currently work, what you like. It's helpful to not have to explain that stuff over and over. But when stuff changes a lot, it actually gets confusing to understand what it's remembering and when.

[00:09:00]

It's because ChatGPT tries to update any conflicting information. When I first said I was planning a trip to Mexico City, it saved that bit of information. Then I said I was planning a trip to Dallas, so it removed the first snippet about Mexico City and saved the snippet about Dallas. That means it won't remember that I actually went to Mexico City. If OpenAI is labeling this as personalization, I'm going to want it to remember that I went to Mexico City and I loved having seafood there and I loved going to these places, because that's what I'm going to ask for on another trip. Having ChatGPT properly figure out the difference between conflicting information and multiple things that can be true at the same time, that actually feels like a solvable problem. The bigger issue is actually much simpler. When am I going to tell ChatGPT that I went to Mexico City? You go into ChatGPT to get help with something, but you don't go back to tell it how that thing got resolved. How is it supposed to be a fully personalized AI? It comes down to whether ChatGPT is simply a work tool or an all-encompassing thing that's supposed to be everywhere in your life.
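One hedged way to picture the conflict problem: if memory is keyed by topic, new trips overwrite old ones, but if it's kept as a dated log, both facts survive. A hypothetical sketch of the two behaviors, not OpenAI's actual logic:

```python
# Hypothetical sketch of the two behaviors described above; not OpenAI's actual logic.

# Behavior ChatGPT seems to show today: conflicting info replaces the old entry.
memory_by_topic = {}
memory_by_topic["upcoming trip"] = "Planning a trip to Mexico City"
memory_by_topic["upcoming trip"] = "Planning a trip to Dallas"  # Mexico City is gone
print(memory_by_topic)  # {'upcoming trip': 'Planning a trip to Dallas'}

# Behavior you'd want for real personalization: keep both as dated history.
memory_log = []
memory_log.append(("2024-02", "Planned a trip to Mexico City, loved the seafood"))
memory_log.append(("2024-04", "Planning a trip to Dallas"))
print(memory_log)  # both facts survive, so past trips can inform future recommendations
```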

[00:10:03]

Right now, it's much more of a work tool. You go to it when you have a problem. Whether or not it transcends work, almost like Rabbit and Humane want to, that's an open question. Now, in the meantime, if you want to use ChatGPT memory, the best applications are with work-related information that doesn't change a lot. The type of company you work for, the work that you do, this is the what of your work life. You should also pair this with another setting you can configure called Custom Instructions, which shapes how it responds to you. Be concise and don't add all this unnecessary fluff. Boil down technical language for me. Give me things in a bullet point list. These types of modifications can save you a lot of time in using ChatGPT. Your big takeaway on ChatGPT and memory: AI should mold to who you are, no matter if you're using it in your personal life or for work. That's evident in the way that ChatGPT is being used today. Most of the use cases look like assistant or thought partner types of actions. Things like, help me think through my situation, write an email for me that says this, can you help me research this?

[00:11:09]

But those have personal flavors to them. You go to your friends to help you think through stuff because they know you and what you care about. You want that email that AI writes to sound like you. You want that research to consider what you like and don't like. Now, there are a ton of ways and new tools that can make this happen. ChatGPT can do this with memory. All the AI device companies like Rabbit, Humane, et cetera, are all solidly in this lane. Even writing, journaling, and email apps like Grammarly or Superhuman have some personalization. We're all tired of asking how we can get AI to sound like us. What we really want is AI that understands us without making it a headache to teach it about us. Our final story of the day is fundraising stories from Perplexity and Devin. Perplexity, the soon-to-be $3 billion startup trying to remake how we search the web using AI, which suddenly tripled its valuation in just a couple of months. And Devin, the now $2 billion startup that launched six months ago and has made zero revenue. Let's take these one at a time, starting with Perplexity. Now, we love Perplexity.

[00:12:16]

If you think Google is filled with spam and overly optimized buying guides that you can't trust, then you should look into Perplexity. Instead of typing in magic keywords and having to crawl through a bunch of links to figure out if you can find the right information, Perplexity turns search into real questions and answers. Perplexity's fundraising history as a startup has been crazy, to say the least. In April of 2023, investors valued them at over $100 million. At the end of 2023, they raised a round valuing them at over $500 million. And just a few months after that, they had a third round valuing them at a billion dollars. So in one year, they go from $100 million to $1 billion. And that's not even the end of it. This week, TechCrunch reported that they're already on the hunt for yet another round, this time raising $250 million, valuing them at $2.5 to $3 billion. That is a crazy amount of money at crazy valuations in a crazy amount of time. But big valuations make sense for companies that have the numbers to back them up. If you make a lot of money, ideally profit, you deserve to be worth a lot.

[00:13:27]

Or if you're growing a lot, you also deserve to be worth a lot, assuming you expect to be profitable at some point. So let's take a look at the numbers for Perplexity. From Bloomberg, Perplexity today has 10 million daily active users. They have $20 million in annualized revenue. That's a lot of people. 10 million users and $20 million in revenue is no joke. But is it worth $3 billion? Let's look at Asana, the project management software company. They're public. They're currently worth about $3 billion. In the quarter ending January 2024, they had $171 million in revenue. Not for the year, just those three months. Now, there's a Silicon Valley benchmark that says the typical startup growing at a healthy clip should be worth about 10 times its annual revenue. It could be lower, like five times if you're growing slower, or like 20 times if you're growing faster. And at a $3 billion valuation with $20 million in revenue, Perplexity would be worth 150 times revenue. But if you think that's wild, then we need to talk about Devin. Devin is a new product that debuted about a month ago, in March 2024. The company making it has been around for about six months.

[00:14:42]

Devin is built as an AI software engineer, meaning you give it some engineering task and it will do whatever it needs to in order to get it done. That includes looking at the documentation, figuring out how the software works, actually writing code, everything a software engineer should be able to do, but it's completely done by AI instead. Sounds cool, right? But early reports for Devin are just that. They're early. Right now, Devin is only successful at tasks that are more tightly defined. It's pretty far off from doing the complex stuff where an engineer has to think really hard. Some software engineers are completely skeptical. One of the top posts on Hacker News this month was a video saying that the company's demo of Devin was nothing more than a staged demo that's not grounded in reality. Plus, this entire time, you can only get Devin if you get off the waitlist. Nobody's paying for it. Back to the numbers. Perplexity was already crazy if they get valued at $3 billion, which is 150 times their current revenue. Devin is now worth $2 billion as a company, but their revenue is, for all intents and purposes, zero, which means the company is worth infinite times their current revenue.
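To make those multiples concrete, here's the back-of-the-envelope math as a short Python snippet, using only the figures quoted above.

```python
# Back-of-the-envelope revenue multiples using the figures quoted in this episode.
perplexity_valuation = 3_000_000_000   # reported target valuation, ~$3B
perplexity_revenue = 20_000_000        # ~$20M annualized revenue

multiple = perplexity_valuation / perplexity_revenue
print(f"Perplexity: {multiple:.0f}x revenue")  # 150x, versus the ~10x benchmark

devin_valuation = 2_000_000_000        # ~$2B
devin_revenue = 0                      # effectively no paying customers yet
# Dividing by zero revenue is undefined, which is the point:
# the multiple can't even be computed.
```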

[00:15:52]

You can't even define the number. I'm saying all this because I want you to understand that you're not crazy for thinking Perplexity 10Xing their valuation in a year, then tripling it in a couple of months, is crazy. You're not crazy for thinking a six-month-old startup is not worth $2 billion. You're not. It's definitely wild. But I also want you to understand the flip side of this, why the investors are willing to value them so highly, because it actually speaks to how big this whole AI thing could get. The simple answer is that investors are willing to value Perplexity at 150 times their revenue and Devin at $2 billion despite zero revenue because they see an opportunity for both of these companies to be worth insanely more. It doesn't really matter what number you buy their stock at right now. Let's take Perplexity as an example. Alphabet, which is Google's parent company, is worth about $2 trillion. If Perplexity really becomes the next Google, even if it's only worth $1 trillion, which is half as much as Alphabet today, whoever invested in Perplexity at $3 billion would make 300 times their money. I mean, yeah, that's going to be less than the people who invested at $1 billion.

[00:17:03]

Those people would make a thousand times their money. But for anyone who couldn't get their money in when the company was a billion dollars, even $3 billion is great. I'd take 300 times my money over nothing. And same thing with Devin. If their product ends up being real and replaces software engineers en masse, that could be a trillion-dollar company. So investors who really believe in that possibility look at the $2 billion price tag today and say, Look, I'll take 500 times my money. Your big takeaway on Perplexity and Devin: when you see these insane valuations for these AI companies, it's not because the company is actually worth that amount today. It's because investors see a massive opportunity ahead. AI has the potential to fundamentally reshape so many industries. In Perplexity's case, it's taking down Google. In Devin's case, it's changing how software engineering gets done. That applies to everything you can think of: the legal industry, all of professional services, in fact, all of sales and marketing. If the promise of AI bears out to its full potential, there are a lot of new companies to be created that'll be worth a lot of money.

[00:18:10]

For investors, the specific value that they give the company is less important than being invested in the company at all, which can turn into these crazy valuations. Still, these valuations and fundraising mean nothing until the companies can actually make useful products and sell them. These rounds are, or at least they should be, the start of the journey, not the end. Some quick hitters to leave you with today. OpenAI and Moderna, the biotech that made a bunch of our COVID vaccines, gave new details about how they're using ChatGPT. Some highlights for you: 100% of their legal team uses it, Moderna has 750 custom GPTs across the company, and each user has 120 conversations a week on average. Example use cases include data analysis of clinical trial data, legal contract summaries, and preparing slides for quarterly earnings calls. A new study showed that using generative AI in healthcare didn't actually save any time for the clinicians. It did, however, reduce the feeling of burnout. It's an unexpected way for the impact of AI to play out, since sales pitches for AI tools often revolve around time savings. Meta's stock dropped 15% after the company announced financial results yesterday.

[00:19:24]

Even though they made more money from ads, they also said that they're going to be upping their estimates on investments relating to AI. The new forecast, $30 to $37 billion for the year on things like data centers, GPUs, and R&D talent. This is Pete wrapping up The Neuron for April 25th. I'll see you in a couple of days.