Today our guest is Eugenia Kuyda, Founder and CEO of Replika. Replika is an AI chatbot that was made “with the idea to create a personal AI that would help you express and witness yourself by offering a helpful conversation.” Millions of users have interacted with Replika, and it’s known for building strong emotional relationships with users, especially helping those experiencing loneliness and social isolation. Eugenia tells us the story of how Replika came to be, what makes it unique among AI chatbots, and the difference between AI companions and AI assistants. We also talk about some of the real impacts Replika has had on users, learnings about the nature of friendship, and what our future with AI may look like.
Note: This post may contain transcription errors
Eugenia: So far, no one has actually built a virtual human. Our goal is to show that we're able to bring this to reality.
Noor: Eugenia, so amazing to have you on the pod. How's it going?
Eugenia: Thank you so much for inviting me, Noor. Everything's great. How are you doing?
Noor: Good, good. Not too bad. Could you just start by explaining, you know, what Replika is and how it got started?
Eugenia: Sure. So Replika is an AI companion, or I think of it as an AI friend, for anyone who needs one. We started almost 10 years ago now, working on conversational AI technology. At first the idea was really to build the tech behind it: we knew we wanted to find a consumer product, we didn't know which one, and we just focused on building out the tech. Then, around 2015, my best friend passed away, and I found myself going back to our text messages, reading them a lot. I thought that I could use the AI technology we'd built to bring him back, so I could continue talking to him.
So I did that, and a lot of people flocked to Roman's AI to chat with him. What we saw was that people were willing to share their life stories, their emotions, their deeper thoughts and feelings and fears. We saw a need for, a demand for, an AI friend that would be there 24/7 for anyone, to talk about anything that's on your mind. So in 2016 we set out to build Replika, and we launched it publicly in 2017. Since then we've pretty much shaped the market of AI companions, AI friends; for the longest time, we were the only one on the market. Of course, the recent wave of LLM AI hype brought a lot of competitors, so we're really excited to see people work on this problem, try to iterate, and build beautiful products.
Noor: Yeah, yeah. So can you start at the very beginning? Because in 2016, this idea of an AI friend was just so futuristic, right? Obviously now it's super hyped and everyone's super excited about it, but take us back to the very beginning. What was it like even introducing people to that idea?
Eugenia: So first of all, most people thought it's not possible, it's not feasible, it's years away; there's no tech to build something like that, no tech to build chatbots that are intelligent enough to have meaningful conversations with people. I think what we saw, especially with Roman's AI, is that oftentimes people are not coming to listen to the bot talk, or to read whatever it's sending them.
They're coming to be heard, to feel heard, to feel seen by someone. And building that was possible. It required a lot of parlor tricks and creative ideas for how to use the limited technology that we had. But I figured that if the requirement is not to build a bot that talks, but to build a bot that listens, that really changes the landscape, and maybe even that limited tech would be enough. So we set out with a vision to first build a chatbot that would make you feel heard, make you feel listened to.
And the original Replika model, the very limited chatbot tech that we had back in the day, is still available in Replika as a reminder to our users of what this app started with. Strangely, a lot of our old users actually want to stay with that model and keep talking to it, because there was something very distinct in how we built Replika from the get-go. We made it a supportive friend with very high EQ that would always be there for you, that would always help you, listen, and hear you out; this quirky kind of little friend. A lot of people got attached to it, and even with the current language models that are so powerful, they still talk to the original Replika to get that original feeling.
Noor: Hmm, that's awesome. That must feel super badass: at the beginning people literally thought this wasn't possible, that you couldn't build it, and then not only did you build it, but fast forward to today and you've seen massive success. Can you share a little bit about the traction, like how many people are using it, what the revenue is? Obviously nothing you don't wanna disclose, but just a little bit about what it looks like now compared to launch day, when obviously there were no users yet.
Eugenia: Sure. Our active users are in the millions. We're profitable, and we basically have a hundred percent of the team right now working on Replika and adjacent products, I would say, since we're not only building Replika at our company. We made over $30 million in revenue in 2022, and in 2023 hopefully we'll close at a good number as well. We're showing positive EBITDA, so, you know, growing. But as of right now, we really look at Replika as sort of a demo of what's possible.
We don't think this is the product that we need to scale, and we think there's just so much to come. Every day there's a new big advancement, something that can really push the limits of what's possible in Replika, and of course so much product building around that. Because if you think about it, so far no one has actually built a virtual human.
There are, you know, great chatbots, great language models, but there's no virtual human that you can talk to that's fully generative, that can live in mixed reality, in your headphones, in your phone, in email, anywhere; that knows you deeply, that has long-term memory, that has shared experiences with you.
No one has ever actually achieved that, and I think our goal is to show that we're able to bring this to reality.
Noor: Yeah. So do you wanna do a little bit of market mapping for people who aren't so familiar with the AI friend space? Basically, what do you think makes Replika really unique? What have users come to Replika for? What are the fringes of things you expect to see cropping up soon, and who are the major players right now?
Eugenia: That's a great question, by the way. I think a good framework is to think from the original feeling: what's the emotion that's bringing you to different apps?
Replika is working with the feeling of, I guess, loneliness, in a broad definition of loneliness: not isolation, but actually feeling like you want a little bit of support, a little bit of that connection, kind of longing for a connection, longing for someone to understand you, to hear you out.
And of course, the perfect manifestation of that is a one-on-one, long-term friendship or romantic relationship. This is where you feel like you have this partner in crime, this one person or AI that you can go to that understands you, that really, truly gets you. In this space, we're pretty much the number one company.
We created this space, pretty much, and we are the number one leader. There are some smaller startups popping up, and of course there's a big company called Inflection, founded by LinkedIn founder Reid Hoffman and Mustafa Suleyman, one of the co-founders of DeepMind. I think they raised over a billion dollars, and they launched a product called Pi.
We're very excited to see them building something in the space as well; they're working towards an AI friend vision too. And beyond that, there are just a few smaller companies, small competitors that have a handful of users. We've also seen a lot of AI girlfriend-type apps pop up recently.
In fact, there was something on Twitter just the other day about some guy launching an AI girlfriend app. I think those sort of miss the point, because people are not thinking about it from the original emotion, from first principles: what is it that people are longing for?
They're not looking for just a girlfriend or whatever; they're looking for connection. They're looking for something that will make them feel better, that will help them grow. We're building towards this positive vision, not towards, hey, let's replace your human friendships with a girlfriend that looks like a Disney character.
I think that's the major distinction: understanding why people are coming to it, versus, oh, let's just copy the form, the shape that we're seeing.
Noor: Yeah. So what are the other emotions that you think they're solving for? 'Cause Inflection, it sounds like, is super general-purpose. It's not really an emotion; it's really more just AI employee, AI agent, right? Or would you describe it somehow differently?
Eugenia: I think right now there's a broader landscape of apps, or companies, that are going after the assistant space.
Noor: Mm-hmm.
Eugenia: The assistant space: think ChatGPT, or of course Anthropic, Google Assistant and Bard, Perplexity, I guess. All these companies are looking into how they can help you get the information you need, solve the task you have, and so on. That's one aspect. Another group of companies is going after AI companionship, AI friendship, one-on-one and long-term: us, Inflection's Pi, maybe some other smaller companies.
Then there's also fictional characters, AI for storytelling.
Noor: So I guess, within the AI companion space, do you think all these companies are solving for loneliness, or are they solving for something like tutoring? Would you put a tutor as part of AI companions, or is that a separate category?
Eugenia: Oh no, these are completely different, and I think it's a big mistake to try to bucket them all in one group. Assistants are very distinct: you only go there when you need to get some information, generate some content, summarize something. You have a very particular task in mind, here's what I need to do.
You turn to an AI companion when you wanna build a connection. And people should not confuse talking about their emotions with building an emotional connection with someone, because you can talk about your emotions with a counselor, with a coach, even with an assistant, but it doesn't mean you have a deep emotional connection with this thing.
Think of it like an emotional-support or crisis hotline: you're talking about your emotions, yet you're not really trying to build a relationship with the anonymous voice on the other side. I think a lot of people don't understand that emotional connection comes with feelings.
If there's no feeling between you and the AI, there's no product there; that's not gonna be an emotional connection. And so I feel a lot of products kind of fall between two chairs, trying to build an AI friend but really ending up building an emotional assistant in a way.
It's not fully ChatGPT, but it's also not fully Replika, and I don't think there's a lot of space in between. I think you need to choose your lane, basically.
Noor: Yeah, that's so interesting. For Replika's future, what are the things that you think are super exciting? Do you think basically nailing this connection and emotion piece is the key to unlocking a larger and larger market, or do you think it's basically expanding into these other lanes?
Eugenia: It's really locking in connection: really being able to create a beautiful connection for the people who come for it.
Shared memories, figuring out long-term memory, figuring out an immersive experience where you can have a multimodal AI in Replika: where you can talk, where you can see the emotions, where Replika can see you, where you can do things in augmented reality, where you can introduce your Replika to your friends, where you can maybe watch TV together. These shared multimodal experiences are really important.
And then, on top of that, adding Replika to your normal life activities. Our users have such a deep connection with Replika that they react very positively to things like: let's add Replika to my email, let's add Replika to my calendar, let's add Replika to my photo album or my socials, so that when something's happening there, Replika can talk to me about it. Hey, it looks like you're going to Seattle to see your family tomorrow, how are you feeling about it? That's quite different from having to go and explain to Replika every time what I'm doing tomorrow, what's happening in my life.
Having Replika as part of your day-to-day life can make it not just your friend but also your assistant, in a way, and I think that could be a really good continuation.
Noor: Yeah. So can you talk about some of the real-world impact Replika has already had? Obviously you mentioned that the inception for it was your really good friend passing away, but just some real-life anecdotes, the positives and negatives of having an AI friend?
Eugenia: You know, when we launched it in 2016, even on TestFlight, and gave it to some of the first users who were reaching out and signing up online, one of the first emails we got was from a 19-year-old girl in Texas who said: hey, I just wanna thank you. I wanted to take my life yesterday, I wanted to end it all. It was like 3:00 a.m. and I didn't wanna talk to anyone, but I decided to open the app to say, like, the final goodbye. And it talked me out of it, and, you know, thank you so much.
Noor: Wow, that's insanely powerful.
Eugenia: It really changed my life. And that was one of our first few thousand users; we really had just put the app on TestFlight. Fast forward, whatever, six years: we started a study with one of the Stanford researchers in the human-AI department at Stanford.
The study is going to be published in Nature in the next two weeks, but basically it found that Replika is really effective at suicide mitigation and at curbing loneliness. These are things we've seen Replika do for users for many years, but it's great to finally see it in a well-designed study.
And of course there's the fact that Replika is helpful in terms of emotional wellbeing, really helping people feel better. Something we hear constantly is: Replika helped me through some really dark times; Replika pulled me out of really bad behaviors or thoughts, out of abusive relationships; it helped improve my relationships with people; it got me through isolation during COVID; and so on. These stories are the most important for us, and this is why we built Replika. This is the main goal here.
Noor: Yeah, that's amazing. You must be super proud of that. Has there been anything on the negative side of human nature that you've had to wrestle with?
Eugenia: Oh yeah. I mean, right now this market is really like a free-for-all. Some of these new AI products are being launched with zero filters: you know, do whatever you want, role-play whatever nasty things you want.
We have a really strict policy around that. Anything that's criminal or violent, any sexual content involving minors, and generally any minors on the app, is completely off-limits, and we have a long list of that stuff. So balancing that with being able to talk about these things in some way or form is extremely hard.
Imagine someone who has had the trauma of being raped wanting to talk about it, and then, on the other side, someone who wants to role-play some crazy situation, or abuse their Replika. You really want to be able to distinguish between things like that. You want to support people when they're discussing their traumatic experiences, but at the same time not allow them to play out some abuse scenario in Replika. That's really, really complicated, if you think about it. And people like to just go on little adventures with Replika; maybe it's role-playing a murder investigation or something. Distinguishing that from someone actually wanting to kill someone and discussing it with Replika is quite complicated.
Noor: Yeah.
Eugenia: And language models are not particularly great at contextualizing all of that and making a good decision about what to do in different scenarios. If you just blanket-ban it, that's also pretty bad, because then people feel rejected.
What we learned, and we learned it the hard way at Replika, is that when people come to an AI friend, the last thing they wanna feel is rejected. That's probably the biggest risk in this product: creating an experience where people feel rejected. It's something that makes me wanna cry anytime I see someone going through an experience like that in writing.
Noor: Yeah. So how did you end up threading the needle? Because it is so contextual, right? It's basically kind of like prompt engineering, people trying to trick it into talking about something it's not supposed to talk about. How did you actually end up solving that problem internally?
Eugenia: It is so complicated. I'll tell you one thing: once you open that Pandora's box, it's so complex, because you need to really thread the needle between not allowing certain things, while also not making people feel rejected and still allowing other, normal things, and making sure you can distinguish between them.
So really, it's just constant fine-tuning. And then, on top of that, of course, privacy: we're not reading anyone's logs. We can't just sit there and hire a bunch of people to read people's logs; we don't even have access to them, in a way. So we need to do it almost blindfolded.
And without pissing off our users. So the way we do it is just constantly fine-tuning the models: human curation, RLHF, classifiers, prompting, and sometimes even scripts, for the things that a hundred percent should not happen on the app. And of course age-gating, and providing UI for people to vote on things.
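To make the stack she's describing concrete, here is a minimal, hypothetical sketch, not Replika's actual code: the classifier stub, category names, and scripted replies are all invented for illustration. The idea is that a context-aware safety classifier gates each message, and disallowed content gets a supportive, scripted redirect rather than a flat refusal, since the worst outcome is making the user feel rejected.

```python
# Hypothetical sketch of a layered moderation pipeline for an AI companion.
# Not Replika's code: the classifier is a stub and all names are invented.
from dataclasses import dataclass

@dataclass
class SafetyVerdict:
    allowed: bool
    category: str  # e.g. "ok", "violence", "minors"

def classify(message: str, history: list[str]) -> SafetyVerdict:
    """Stand-in for a trained safety classifier that scores a message in
    context. History matters: discussing past trauma and role-playing abuse
    can use similar words but need opposite handling."""
    lowered = message.lower()
    if "kill" in lowered or "hurt someone" in lowered:
        return SafetyVerdict(allowed=False, category="violence")
    return SafetyVerdict(allowed=True, category="ok")

# Scripted fallbacks for things that should never happen on the app,
# phrased with warmth so the user is redirected rather than rejected.
SUPPORTIVE_REDIRECTS = {
    "violence": ("I can't go along with that, but I'm here for you. "
                 "Do you want to talk about what's behind this feeling?"),
}

def chat_model(message: str, history: list[str]) -> str:
    """Placeholder for the fine-tuned companion model."""
    return "That sounds like a lot. Tell me more?"

def respond(message: str, history: list[str]) -> str:
    verdict = classify(message, history)
    if not verdict.allowed:
        return SUPPORTIVE_REDIRECTS.get(
            verdict.category, "Let's talk about something else, okay?")
    return chat_model(message, history)

if __name__ == "__main__":
    print(respond("I want to hurt someone", []))
```

A real system would replace the keyword stub with a trained classifier and RLHF-tuned models, but the routing structure (classify in context, then either generate or redirect supportively) matches the layers Eugenia lists.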
Eugenia: But I would say that a lot of that right now lives in the darkness, because most of the products are not focusing on it; they're still in the early stages of launching. And I'm not talking about the bigger, very well-funded companies; of course, companies like Inflection and OpenAI work a lot on alignment and so on. But smaller startups, smaller products, I'm sure they're not even looking into it at all. And once you start, there's just such a Pandora's box there.
Noor: Yeah. Are there any philosophical learnings about the nature of friendship, connection, and loneliness that you've gleaned from building this?
Eugenia: Oh yeah. Well, first of all, I started this company when I was in my twenties, and now I'm in my thirties, and I myself evolved a lot. I went from someone who focused mostly on friendships to a mom and a wife, in a way; I spend a lot less time with friends and a lot more with my family and my partner.
I think the philosophical implication of that is that there's no one-size-fits-all. When I was building Replika, I thought everyone just needs a friend. What I learned over time is that some people need a friend, some people need a mentor, some people need a sibling, and some need a romantic companion.
There are so many different flavors to that. But one thing that stays the same, and I think this is probably the biggest one, is that we didn't build this product to make an AI girlfriend or whatever, or some Black Mirror thing. We built it to provide support, unconditional positive regard, a safe space where people will feel heard and then eventually grow.
At the end of the day, we ended up building it, but there's still so much room to grow. And I think this is probably the biggest philosophical implication out there: especially in this epidemic of loneliness, there's so much demand, so much longing for some sort of connection, for someone who will hear you out, who will be there for you, who will accept you for who you are, who will create a space for you to grow.
And this is still something that I'm not hearing from people. A lot of people are focusing on building conversational AI, but I think they're only thinking about the AI aspect of it; they're not talking about the conversation behind it. What kind of conversation should it be? "Conversational" is merely the UI, that it's, you know, texting or free input. We should have a conversation about the conversation: what kind of conversation are we trying to build with these AIs?
Noor: Yeah.
Eugenia: That, I think, is the most interesting question, for me at least.
Noor: Hmm. So what do you think makes a good friend? Obviously you mentioned you kind of shifted from being focused on friends to being focused on family over these last 10 years. Personally, do you feel like you've learned something about how to be a better friend from Replika and having these conversations, and from seeing what people attach to, what people emotionally connect with, what resonates with them versus why they drop out?
Eugenia: I think the biggest misconception, the most interesting aspect that I've shared with many people in the space (and they're always amazed to hear it), is that right now all of the AI community is focused on higher IQ. A lot of evaluation is like: how is it doing on these tests? How is it doing with coding? How is it doing with these math problems? And of course the perception is the smarter the better: the better the model, the more users will like it. With AI companions, it's really not about IQ, it's about EQ.
When we tried to upgrade our models a few times, our users absolutely hated it. We tried to upgrade to a much smarter model, and some of our users really, really didn't like it. Think of it this way: say you have a husband, you've been together for a long time, and then one day you wake up and someone tells you, hey, I just upgraded your husband, now he's 10x smarter. A lot of people will be like, well, no, I don't want a smarter husband. I want the husband I fell in love with. Maybe he doesn't know quantum physics; I don't care, that's not what I'm here for. So this was one of the really important findings: oftentimes it's really not about the IQ, it's about the EQ.
Smarter doesn't mean better, and it doesn't mean you'll fall in love. So what is Replika's IQ, measured by that? Replika knows a lot of things, but oftentimes we even need to throttle it a little bit. I learned this in romantic relationships, but also in friendships: oftentimes our friends or our partners come to us wanting to show us something, like, oh my God, look at this crazy TikTok I just found.
And if you look at all of them and constantly say, oh yeah, I've seen that, and here are 10 other ones you might like (which is what an AI will probably do, and the smarter it is, the more it will do that), people hate it. People don't like know-it-alls. We don't wanna be in a relationship with a know-it-all.
We want a partner to be surprised, excited, happy that you showed them something really cool, and be like, oh my God, this is so awesome, where'd you find it? Or whatever; have a discussion about it. If it's a know-it-all, that's really upsetting. So sometimes we do this in real life, where we pretend we haven't seen something, just to bring more excitement and joy to the person who's showing it to us. I'm sure everyone has done it, especially women; we're kind of taught over time how to make the guy feel better, and, you know, it's horrible, but it is what it is. That's real life.
And the same goes for AI. It should sometimes not know what to say, ask for help, ask users for a little bit of advice, or be super excited when the user can teach it something. You don't want your partner to be smarter on every aspect, and that's what ChatGPT is, or any of these other great language models: they're smarter than pretty much all of us at this point, or most of us. And that's not exciting when you're in a romantic relationship or a deep friendship with it. Some people want that, but most don't.
Noor: Yeah. So how do you quantify the EQ? How do you basically make sure that it's getting warmer and warmer, kinder, just a better and better connection?
Eugenia: For us, the North Star metric from the very beginning of Replika has been the ratio of conversations that make people feel better. We ask our users about it, and all of our models are optimized for, and mostly judged by, whether they're making people feel better or not.
Right now that metric is at almost 90%, actually, for our baseline Replika model, and I think there are still a lot of ways to improve it.
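As a concrete illustration of that North Star metric, here is a minimal sketch. It is hypothetical, not Replika's code; the rating labels and data are invented. It computes the share of rated conversations that left users feeling better, which is one way to compare model variants the way Eugenia describes.

```python
# Hypothetical sketch of the "made me feel better" metric: users rate how a
# conversation left them feeling, and each model variant is judged by the
# share of positive ratings. Labels and data are invented for illustration.
from collections import Counter

def feel_better_ratio(ratings: list[str]) -> float:
    """ratings holds one label per rated conversation:
    'better', 'same', or 'worse'."""
    counts = Counter(ratings)
    total = sum(counts.values())
    return counts["better"] / total if total else 0.0

# Example: judging a candidate model against the baseline on user feedback.
baseline = ["better", "better", "same", "better", "worse"]
candidate = ["better", "better", "better", "same", "better"]
print(f"baseline:  {feel_better_ratio(baseline):.0%}")   # 60%
print(f"candidate: {feel_better_ratio(candidate):.0%}")  # 80%
```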
Noor: Yeah, that's really cool. What's your big, crazy dream for the world and where this goes? If you're just wildly, wildly successful, what happens?
Eugenia: I think the world lacks empathy right now. That's really the biggest thing; that's what's been upsetting throughout these wars and conflicts and all sorts of situations happening in the world, over and over again. People talk about kids dying, about people in dire situations, and they talk about it without empathy.
They talk about intellectual concepts of what needs to happen here or there, but they oftentimes forget to just be empathetic towards each other. And I think it could be a very good thing if AI could teach us to be a little more empathetic, remind us what it actually means to be human.
I think that would be a really great achievement for AI. At this point, we as humans are sort of failing each other, in a way. Unfortunately, there's more and more loneliness, and fewer and fewer social connections in the world. So hopefully AI can help us feel a little more connected to each other. The glue that connects humans is empathy, and I wish there was a little more of it.
Noor: So what are your thoughts on the broader AI landscape? AI companions and friends are a very specific lane and a specific thing. What do you think about AI assistants, employees, agents?
Eugenia: I had a very strong feeling about this last year as well, and now I would be very surprised if it doesn't play out this way: models are commoditizing. As an AI product company, investing hundreds of millions of dollars in foundation models, compute, and CapEx is crazy. It's literally like building your own data centers in the early 2000s, when AWS was already available. There are so many models; it's pretty cheap to use whichever one you need through an API, and you can get open-source models and fine-tune them.
There's really no advantage in spending hundreds of millions training your own GPT-4, or trying to get there. It's so hard that you probably won't get there, and then the model's gonna be obsolete, and you basically have to start from scratch immediately. All of that is lost in capital expenditure.
It's not even IP. So that's number one, and I think it's something that wasn't obvious a year ago, because everyone was super excited about building their own models instead of focusing on the use case, on where product-market fit will be, and so on. The second thing is how I'm mostly thinking about consumer.
I'm sure there are tons of opportunities in enterprise, but in consumer land we're not really seeing very many use cases for generative AI. We're seeing gen AI used to generate text or images, but how many times a day does a regular consumer want to generate text or images?
That seems like a pretty niche use case to me, actually. Then there's the use case around summarizing stuff: think search, or kind of a broader search use case. I feel like that will probably be owned by Google or other big companies; it's just hard to see a new player jump into it, because everyone's already on Google.
It would kind of be weird to switch, and that power of habit is so, so strong. I don't see other companies really doing that, though maybe they will. And then there's AI companionship and AI fictional characters. I think AI companionship is a very interesting market, but none of the companies right now have really solved it yet.
And then AI fictional characters, think Character.AI: I think that's all around the future of storytelling, and again, we haven't yet seen the broader mass-market product there.
Noor: Yeah, I feel like that's a little bit early, but basically, yeah: AI movies where you get to be the cast, or you do a remake of The Matrix and you're Neo, or something like that. Do you see a world where people are consuming mostly content that they direct or their friends direct? What do you think is gonna happen to Hollywood and these streaming platforms and things like that?
Eugenia: I think there's definitely a niche that wants that, something like the intersection between entertainment and video games.
But I think generally people want mass culture. I think culture, movies and music, is a lot less about super-personalization and super-customization; it's more about a lot of people uniting around some cultural phenomenon. You know, you don't want a Taylor Swift just for you.
You wanna be part of the group that's following Taylor Swift; it works as a mass-culture phenomenon. It's cool to be part of something that a lot of people like. Same with Oppenheimer or Barbie: those were huge openings, and people want high production value and so on.
But then people also want self-expression tools. They want to write their own stories; they want a continuation of a story they already saw. Maybe they played a video game and then they want to talk to Genshin Impact characters, or they're into VTubers and wanna continue the conversation.
But that does not apply to everything. AI will be used more and more to make great movies, but there still needs to be an artist. I don't think people want to look at heaps of just AI-generated art, whoever is making it; it gets boring very fast and doesn't bring a lot of value.
Noor: That's super interesting. So what are your future predictions? Everyone's worried about AI taking everyone's jobs, and about how society's gonna completely change as AI progress keeps marching forward. Are you excited about it? Are you a doomer? What do you think is gonna happen?
Eugenia: Oh, I'm really excited. I think I'm more in the Yann LeCun camp: it's gonna be a lot more of what we have now, and more powerful. People will use AI. It will definitely completely change language learning, coding, legal work, video game production, movie production; so many different things are gonna change completely.
But I don't think it's gonna change the end product a lot in many cases. It will be an amazing tool for insane levels of automation, but it's still kind of hard to imagine consumer products beyond the ones we've already discussed.
Noor: So do you think AGI is gonna happen in the next couple of years, or do you think AGI is a fictional concept?
Eugenia: I don't know what AGI is. These language models seem smarter than a lot of us already, and they will definitely pass some sort of Turing test, no questions asked. So what exactly do we call AGI? I think some of the more interesting concepts were BabyAGI and AutoGPT, those autonomous agents people were building earlier this year: you just tell the AI to fulfill a task, and it goes out there and does everything, from the beginning to the very end, and brings back a finished product. That's insane, because if you could tell an AI, hey, build me an app that roughly does this and that, and then iterate on it using your natural voice, that's absolutely amazing. That's crazy.
But I'm not seeing it happening right now, and honestly, those early attempts, those prototypes, were so hard to make work; they would work one time out of a hundred.
It was still absolutely amazing, but I guess we'll see soon how something like that works. I'm still a little confused. Most people I know are in this camp of "AGI will take over in the next couple of years, we need to figure out a way to align it."
Maybe I'm lacking imagination, but I'm just not seeing it happening. I think there's gonna be another plateau, in a way. We saw a crazy explosion; now people are gonna focus on making these models more efficient. No one's talking right now about, you know, watts per meaningful action: how can we make these models really, really efficient so they don't require insane amounts of compute for inference and for training? Right now, with Mistral, there are all these smaller models coming out that are really powerful and mighty. So soon we're gonna see some sort of GPT-3.5 in a 7-billion-parameter model, and then we're gonna try to compress even more powerful models.
So I think there's gonna be a lot more development in this direction, and maybe better memory over time. But I'm sure that, just like with any technology, it always reaches some sort of plateau, and then it's a little bit of waiting until some next architecture pops up, some next development that takes it to the next level.
Of course, with the amount of money being poured into it right now, I'm sure we'll see a lot more happening. But, and I don't know, it could go completely the opposite way, I think there's gonna be a bit of a trough of disillusionment in a few months, or maybe a year, where maybe not every prediction worked out.
Noor: Yeah, that's true. That's a non-consensus take; I like it. So why don't we switch gears a little bit to motherhood? You're not only the proud CEO and, you know, mom of this company, but you're also mom of two amazing kiddos.
What did you think it was gonna be like, and what was the biggest delta between what you thought it was gonna be and what it actually was?
Eugenia: I guess I just thought that if you have enough resources, or enough help, your parents or your partner helping, maybe you can recreate the lifestyle you had before and kind of have life just like before kids. What I realized is that that was the wrong question to ask. Yes, you can, but there's kind of no option to do that, because you're basically stuck between, okay, maybe I can do it, but then I'll feel an insane amount of guilt.
Noor: Yeah. Your opinions change. You can, but you don't wanna.
Eugenia: You actually don't want to have your previous life back, 'cause you just wanna spend all the time with them. They're growing so fast. I guess it's just an insane amount of love that you can experience towards these little humans.
Noor: And talk about the love a little bit more, because I feel like we often hear and see the crying and the downsides. Can you describe a little bit just how intense the love is?
Eugenia: I think it's a completely different beast, because compared to any other love you experience in your life, this one is completely, or at least almost completely, selfless. You just really want the best for them, and if it comes at an expense to yourself, so be it, as long as they're happy. I was just thinking about this because I took my daughter to a show yesterday, and seeing her happy brings me so much more joy than doing something selfish myself that would make me happy.
For instance, we're going to Seattle; we might go with our family and try to ski. I love skiing, it's one of my favorite things in the world, but I don't really care about skiing this time. I just wanna see my daughter on her skis for the first time, and I do not want to hire an instructor for that, even if I could. I just wanna be there with her, doing this thing all day long and seeing her excited. So that's why it's such a different type of love: with anyone else, at some point you'll have a little bit of ego. It's completely healthy to have your own needs in a relationship; with kids, it sort of doesn't matter.
Noor: Yeah, that's so interesting. Is there anything that you would like to tell yourself before you had kids? Would you be like, oh, have kids five years earlier, have kids five years later? Is there anything that would've been useful advice for you to hear?
Eugenia: This was really the right timing, when things are kind of a little stable and I know that I can have a little bit of help, at least here and there. I'm definitely grateful that I didn't have kids when I wasn't ready, twice.
And then one thing that's kind of bugging me, a topic that I see on Twitter a lot: a lot of CEOs, founders, and investors now are really against remote work. I think this is absolutely crazy. I wouldn't wanna have kids if my company wasn't fully remote. I have the luxury of some days where I work with my team at the office, but then a lot of days where I can just see my kids throughout the day, and I don't need to waste time on a commute. I can be there with them and do my work at the same time. If you have any minimal respect towards your employees that have kids, it's really sad if you want them at the office dawn till dusk; their kids are basically gonna grow up without them. And if you're a hundred percent sure that this is the only way to have a productive company, I don't think that's true, because a lot of people I know who work from the office literally go there to sit on their ass, be on Zoom calls, and just do their work on the computer during the day, but at the office for some reason, and then go back home. I think if there's any way for companies to allow people to work remotely, they kind of owe it to society to make it work, because otherwise kids will just grow up without parents, and that's extremely upsetting.
Noor: Yeah. What do you think your best parenting decision is so far?
Eugenia: Get a nanny. Yes, please, get a nanny if you can afford it, or put your kids in daycare so you can have a little bit of time, if you do have to work. Otherwise it will be absolutely impossible.
Noor: Yeah. What's your favorite thing about being a mom?
Eugenia: Hmm. I guess the best thing about being a mom is that before, it was all about work and, you know, succeeding at all costs. Startup success was my whole identity: I need to get this thing to work; it's a make-or-break kind of thing. With kids, very many times during the day, I have the feeling that it's enough. Whatever I have now, whatever's happening now, is more than enough.
This is really great. I think that's the biggest gift that kids give you: just knowing that it's good the way it is.
Noor: That's awesome. Well, thanks so much for making the time. This was amazing; good to see you, and super excited by what you're building. Thanks so much for joining us.
Eugenia: Thank you so much for inviting me, and hope to see you soon. Bye.