The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you. === fei-fei: [00:00:00] AI is a civilizational technology. We now know, I think with very little doubt now, that AI's impact on our society is transformational. This has to do with jobs. This has to do with the way governments are impacted by AI. It touches on geopolitics. How do we wrap our heads around this? How do we work with lawmakers? How do we work with individual citizens? How do we make sure this technology doesn't tear our society apart? How do we make sure that we use the technology to increase productivity, but also to ensure shared prosperity? These are bigger societal problems that have to do with human-centered AI, so all these concentric rings of human-centeredness are critical to today's AI age. hugo: That [00:01:00] was Fei-Fei Li talking about AI as a civilizational technology and why we need to center human values as we shape its future. In this episode of High Signal, Duncan Gilchrist and I speak with Fei-Fei about a remarkable journey that spans physics, neuroscience, ImageNet, and now the frontier of spatial intelligence and 3D foundation models. We start with the early days of AI, when the field was still in a winter of sorts, and walk through how Fei-Fei's work helped catalyze the deep learning revolution. We talk about curiosity as a North Star, what human-centered AI actually means in practice, and how her startup World Labs is reimagining how we interact with space, sensors, and machines. It's a conversation about science, responsibility, and how the next wave of AI might unfold, not just in research labs but in society. If you enjoy these conversations, please leave us a review, give us five stars, subscribe to the newsletter, and share it with your [00:02:00] friends. Links are in the show notes. Let's now just check in with Duncan before we jump into the interview.
So I'm here with Duncan from Delphina. Hey Duncan. Hey, here we go. How are you? I'm well, thanks. So before we jump into the conversation with Fei-Fei, I'd just love for you to tell us a bit about what you're up to at Delphina and why we make High Signal. duncan: At Delphina, we're building AI agents for data science, and through the nature of our work, we meet with lots of leaders in the field. And so with the podcast, we're sharing the high signal. hugo: Totally. And we had what I found to be such a wonderful conversation with Fei-Fei, which we're about to get into, but I was just wondering if you could let us know what resonated with you the most. duncan: I've spent my professional career treating AI and ML as really a performance lever: optimize a model, move a metric, incrementally uplevel a business. And Fei-Fei drops a line that gives me shivers: that AI is a civilizational technology. It's not a feature, it's not an [00:03:00] industry. It is civilizational: jobs, geopolitics, the social fabric itself. Today's conversation takes the opportunity and the weight of this work head on. Let's get into it. hugo: Hey there, Fei-Fei, and welcome to the show. fei-fei: Hi, Hugo. Thank you. hugo: Such a pleasure to have you here. I feel you've had such a remarkable path that includes so many different vectors and types of work, from research to entrepreneurship to education, and I'm just wondering, for you, what were some of the pivotal moments in that evolution? fei-fei: Thanks for the question. I'm very grateful for how my career has gone. What are some of the pivotal moments? The first one is discovering my first love, and my first love was absolutely physics. As a teenager, in my early teens, actually probably around 12 years old, discovering the world of physics opened a door. It was simple, right? Just mechanics, optics, [00:04:00] electromagnetism. That fascination, that curiosity,
that whimsy of the world of science has remained in my life ever since. So that was a pivotal moment. Another pivotal moment was working in AI in the very early days, during my PhD. It was pivotal in a very private way, because there was no fanfare around AI. The world didn't speak of AI; it was the AI winter. In fact, the word AI was barely mentioned. But it was discovering that there is a science that gets to the core of intelligence. It opened the door for me to study how intelligence works, and making intelligent machines, especially visually intelligent machines, was an incredible, incredible journey, especially in the early days, the formative years. [00:05:00] That was very pivotal. Of course, ImageNet was a pivotal moment. It spanned several years, from the ideation, to carrying it out, to enduring the lack of reception, to the moment of the ImageNet challenge, with convolutional neural networks and deep learning coming back to life with ImageNet and GPUs. That entire journey spanned five years, and it was an incredible moment, or long moment, for me. And then we have to fast forward. Around 2018, as a computer scientist and technologist and educator of AI, I had an epiphany: AI was no longer a private love of mine. My generation, including my own work, had brought AI to the world in a way that's [00:06:00] more transformational and impactful than I had ever dreamed of. With great power come great consequences. It had become a civilizational technology that gives us so much hope and opportunity, but also brings consequences that are deeply, profoundly human. And that was the moment I realized I should return to Stanford to build the Human-Centered AI Institute, to really study and research and promote the idea of human values absolutely at the center of the development of AI.
So that was a moment for technologists like me to realize there's more to the science I love than the technology itself. And I'm not going to do a laundry list. Last but not least, there's this newest journey I'm on, which is being an entrepreneur: forming World Labs and working at World Labs with my former students and [00:07:00] incredible technologists, in today's AI age, especially generative AI, and creating a piece of technology and product that we feel the world hasn't seen before. That's really exciting and fun. hugo: That's super cool, and thank you for such a thoughtful tour of so many things that have impacted you, and the impact you've had as well. I'm really excited to get into human-centered AI and spatial awareness and what you're up to at World Labs. But just to recap the journey, from your first love of physics, to computer vision and CNNs and ImageNet, to all the amazing stuff happening today in the world of AI, and then of course the human-centeredness: it's a journey of increasing complexity as well, right? The world of physics is exciting and challenging, but it almost seems quaint compared to the complexity needed to interact with and think about these systems, and then approaching the human-centeredness. And the journey you've just described [00:08:00] isn't linear, but it's a well-defined path now, and I'm sure it wasn't always obvious to you what the signal of the path was. So I'm just wondering if you've had a personal North Star throughout the journey, or what type of things drove you, and how you found the signal that allowed you to follow this journey? fei-fei: Yeah, thank you, Hugo, for asking that question. A lot of young people ask me that question. In hindsight it looks like it was linear, but it wasn't, especially the journey of a scientist, and I think the same is true of the journey of an entrepreneur. You're often on a path of darkness.
You're often on a path of uncertainty. There is a lot more that's unknown than known. Recently I wrote a book, The Worlds I Saw, and in fact the very thesis of that book is about North Stars. I would say that one absolute North Star that has always guided me is curiosity. It is just so human, so core to human value and to human creativity. And I have always been [00:09:00] relentlessly, or even a little bit naively, courageous in going after curiosity, because I think that's just fun, and it transcends the individual; it transcends even what's in front of us. That's one North Star, and my curiosity has always been about the science of intelligence and making intelligent machines. That has carried me very far on my journey, and it deepens as my career as a scientist deepens. I think another important North Star is really believing that technology can be benevolent to humans. That conviction and optimism about the benevolence of technology for humans guides me towards doing things that are human-centered and have the kind of human value that I believe in. That's just another North Star that continues to guide me. hugo: I love that, and [00:10:00] a lot of the people who I find interesting and doing a lot of worthwhile work in the space are driven by those two things as well. The curiosity I find interesting, and I'd like to drill down into that for a second, because in today's landscape there are so many things one could be curious about. There are new models coming out every week, new tools, and that type of stuff. Can you be too curious, and how do you decide what to focus on when your curiosity can lead you everywhere? fei-fei: Well, that's actually a great question. I suppose you can always be too anything; just put in another adjective there. The reason I loved physics, in hindsight, was not just the Newtonian laws and the Maxwell equations and later the quantum equations.
It was actually the ability to ask audacious questions. Physicists, of all scientists, seem to have this incredible appetite and conviction to ask the most audacious questions, such as: what is spacetime, what is the boundary of the universe, what are the [00:11:00] smallest particles, and how do we unify the forces? Some of these questions to this day don't have answers. I think it's that combination: identifying an extremely audacious question that is so hard to solve or to find answers for, plus the vector direction it gives you, so that you can explore in many ways. When there is a vector direction, in the words of physics, it's almost like a field. Once you have a field, your curiosity has a direction to align with. That's how, at least for myself, having been trained in physics and in AI, I'm drawn to audacious questions. I'm drawn to big problems that no one has solved before, and then I let my curiosity fuel that journey towards that goal. hugo: Beautiful. So what I'm hearing is curiosity conditioned on a question. I joke that I'm probably a Bayesian, [00:12:00] because you can never say you aren't a Bayesian, so I think in terms of conditional probability as well. I totally agree about physics. In a previous life I was a pure mathematician, and people thought that was a bit out there, but I always said that we were playing catch-up with physicists, right? They would do things mathematically that made no sense, like the Dirac delta function, and then the mathematicians had to catch up and say, let's formalize this. So I totally agree with respect to the physics mindset as well. I'd love to jump into human-centeredness, which you spoke to and which I know is a deep concern and aspect of the work you do. I'm just wondering if you could define, or give us heuristics for, how we can think about human-centeredness in the context of AI systems today, and what it could mean. fei-fei: Yeah.
To me, again, human-centeredness is yet another North Star for AI. AI can be a family of technologies as well as products and services, but that North Star for me is benevolence for humanity and for individuals. That's [00:13:00] how I see human-centeredness, and of course it would be guided by the values of the society we live in. In my head, I visualize three concentric circles of human-centeredness. The innermost is the individual: we want to create technology that helps individuals, that empowers people, that respects the dignity of people. I do a lot of healthcare work at Stanford, especially using smart sensors to help the aging population and chronically ill patients to live better, and to catch those clinically relevant moments that might lead to consequences if they're not cared for. Even with the best of intentions, AI technology can unintentionally step on boundaries in ways that are questionable for our values, whether it's [00:14:00] privacy or taking away individual agency. So we need to be very aware, when we develop technologies like this, of how to keep individual value, individual dignity, and respect for the individual at the core. That's the individual layer. Then, a little bigger, the middle of the concentric circles to me is community, right? Groups of people come together and form communities, and AI is a technology that can be very powerful in helping communities. For example, creators in today's generative AI era create a lot of content, and generative AI can now create content too. What is the relationship between generative AI and creators? How do we empower them? I absolutely believe that we are here to augment people. We're using technology to empower people. We're not here to take away creativity.
We're not here to take away what [00:15:00] properly belongs to artists and creators, and these are the values, as well as issues, that we need to grapple with in a human-centered way. Last but not least is society. AI is a civilizational technology. We now know, I think with very little doubt, that AI's impact on our society is transformational. This has to do with jobs. This has to do with the way governments are impacted by AI. It touches on geopolitics. How do we wrap our heads around this? How do we work with lawmakers? How do we work with individual citizens? How do we make sure this technology doesn't tear our society apart? How do we make sure that we use the technology to increase productivity, but also to ensure shared prosperity? These are bigger [00:16:00] societal problems that have to do with human-centered AI. So all these concentric rings of human-centeredness are critical to today's AI age. hugo: Absolutely, and I love how you frame it as concentric circles, from the individual, to communities, to society at large. I totally agree that we're seeing such a foundational and fundamental change. I'm not the first to have said this, but I do think that if development on foundation models, for example, stopped today, which it clearly won't, we'd still be figuring out applications and how to use them for decades. And I totally agree with your assessment on trying to figure out how they can help larger-scale organizations work, because my hot take is that generative AI tools at the moment are wonderful for small teams and individuals, but larger organizations haven't figured out how to incorporate them into their processes, so they can actually slow them down at the moment as well. I love the examples you gave, and I'm excited to get into spatial awareness and sensor-based, perhaps IoT, stuff as well.
But I'm wondering if you could [00:17:00] share a few experiences that most strongly shaped your human-centered approach to AI. fei-fei: I'm an immigrant, and I think that is by itself a very profound experience. I moved from China to New Jersey when I was 15, and that was an extremely formative year. In my book I talk about being planted into a new society, with a new language and a new culture. It was extremely shocking, to an extent traumatizing, but also a very new experience for a teenager. What I feel very grateful for is that people outside of my family gave me the warmth of humanity and showed me the light of kindness and compassion. One family I particularly mention in my book is that of my high school math teacher, [00:18:00] Bob Sabella, and how he and his family really extended their hands to help me and my family, to help a teenager. That kind of compassion was a seed of the most beautiful human values, planted in me when I was young. I also feel that along the way in my career, especially in the early years when AI was such a new field, everybody was so curious: my mentors, my professors, my colleagues. It was a world filled with curiosity and support. Last but not least, one person I mention a lot in my book is my mom, who is a very strong woman, but who has also been physically quite ill for decades, and we have a symbiotic relationship. On the surface, I take care of her: I'm her translator, I'm her caretaker, I'm her medical [00:19:00] case manager. I have been through every single healthcare scenario, from the ER, to the ambulance, to surgery, to the ICU, to hospitalization, to you name it. On the other hand, she's also such a strong woman. She showed me a kind of conviction.
Especially as a mom, for my passion: even in the hardest days of immigrant life, especially financially and medically, she was more unwavering than me about my love for science, my passion for AI, my passion for being a scientist. That kind of light, that light of conviction and passion and unconditional support, also instilled in me the kind of values that I care so much about. So these are just examples. All in all, I have seen the [00:20:00] most beautiful parts of humanity, the compassion, the kindness, and I believe in it. hugo: Absolutely, and thank you for sharing such personal stories as well. As you know, I'm back in Australia. My dad's 85, and I'm an only child, which is one of the reasons I moved back, so I can relate to a lot of these things, and this is something I reflect on. Something I'm hearing in there is this tension between the frailty of the human body as we get older and the incredible robustness and energy of the human spirit, and how those can be combined. It makes sense to me now, of course it did anyway, why you are particularly interested in elder care as well. That actually brings me to another question: what application areas, like elder care or climate, do you think especially highlight the importance of the human-centered approach to AI? fei-fei: Yeah, Hugo, frankly, AI is so horizontal that I genuinely believe it's almost all areas. Of course, it's very illustrative in medicine, and, [00:21:00] like you said, because of my own personal experience, I especially care about human healthcare delivery and helping the vulnerable, whether we're talking about ambient smart sensors, or future robots, or just way better diagnostics and tools. These are all great areas. You also mentioned sustainability and climate.
AI has an incredible opportunity to help map out our biodiversity, to understand our oceans, to model weather and climate, and to help us discover new forms of energy. Two years ago, the breakthrough in fusion was largely a result of improvements in machine learning methods in the national labs in America. But there's also, for example, education. Our human education system hasn't changed; the system developed in the West, which now predominates across the whole world, [00:22:00] has stayed the same for more than a hundred, two hundred years, especially the early-20th-century structure of education. But the way information is encoded and transmitted and distributed has drastically changed, right? We now have computers, we now have the internet, we now have AI. So I actually think generative AI is really a wake-up call to the education system, and this is not just K-12. At our fingertips, we have tools for lifelong, continuous learning, and that's another really important human-centered example of an AI application. I also think there are very unsexy, or less celebrated, examples of AI, for example agriculture. Agriculture is actually critical for global wellbeing, but how do we make it more efficient? How do we [00:23:00] help alleviate the hard labor from humans? These are profound changes that AI can help with. And last but not least, I also want to call out government itself. Globally, every society is dealing with government, and for governments to serve their people more efficiently is better for everybody. AI is a huge opportunity in terms of using technology to help serve people. So in every single industry we can find human-centered AI examples. hugo: Absolutely, and I love all those examples. I work a lot in education, and I think that's such a key example. I'm very excited about the future of personalized education as well.
Because a lot of education has still been in broadcast mode, right? Not speaking to individuals. So I think that's an incredibly rich terrain that I'm excited to explore. I also [00:24:00] love that this conversation is veering towards what I would call future music, right? Because we're talking about all these applications which we're starting to discover, and what we've had so far is incredibly exciting. We've had our, quote unquote, ChatGPT moment. Before that, of course, we had, for lack of a better term, our Stable Diffusion moment, which I think is as important, although not as flashy and culturally discussed. But there are so many exciting things on the table, and you've hinted at some of them, such as sensor-based technologies and even robotics and spatial awareness. It almost seems quaint now: with ChatGPT, oh, you can talk with software now, fantastic. But there's so much more, so many more rich opportunities on the table. I know spatial awareness is something you think about and work on a lot, so I'm wondering if you could give us a brief introduction: what is spatially aware AI, and why is it important? fei-fei: Yeah, I'm really excited by what I call spatial intelligence. I see that as a huge part of intelligence as a whole, [00:25:00] whether you call it AI or AGI, because understanding 3D space, and being able to interact in it, and to create and innovate and do a lot of things in it, is fundamental to animal intelligence, especially human intelligence. And it will be fundamental to computers and robots and virtual agents and all that. So that's the global umbrella of spatial intelligence. What's fundamental about spatial intelligence is 3D, because space is 3D, and being able to model 3D space, so that you can create worlds that are three-dimensional, mathematically 3D, opens doors that have not been opened. For example, any creator knows that in order to truly create,
they need a kind of controllability and consistency in the process of [00:26:00] creation, whether they're designing furniture, or an interior arrangement, or creating a film, or creating marketing material, or just having fun, or the kind of e-commerce that we're seeing. In all of this, creators need incredible control, and spatially intelligent AI can really help to democratize that technology and lower the energy barrier of creation. Another example is that so much of the global market involves interaction with different spaces. Of course, if you have kids, you can naturally cite the example of gaming, and it's true, right? Gaming is extremely interactive, mostly in 3D space. But there's more than gaming. There's education, whether it's professional education or K-12 education; a lot of educational experience is about understanding something. Imagine teaching a kid the solar system. It'll be so much easier if it's 3D. Of course, [00:27:00] kids today in class can make physical solar system props, but that's just one example to show that if we have the ability to create digital, virtual worlds that are 3D to interact with, it'll open up opportunities from gaming, to interactive experiences, to professional training, whether it's sports, or learning to be a surgeon, or cooking an omelet, or whatever you can think of. So that's another area. Last but not least, and I'm just giving three examples, we are very excited by the future of robots, and not just humanoids: any machine that can navigate and do things in a complex world so that it can help humans. You can call it some type of robot, including cars themselves. Hmm. In order for robots to navigate the world and to be able to help do things in the world, whether it's changing light [00:28:00] bulbs, or lifting things in a warehouse, or saving people in natural disasters, all of this requires spatially navigating and understanding what's going on.
That is spatial intelligence. So all these examples tell you, like you said, Hugo, it's beyond language, and it needs a different language. That language of nature, that language of space, is spatial intelligence and 3D representation. hugo: I love it, and I love all those examples. Here's an example: I've got a lot of friends who are architects, and this is a toy example for me, but I think it's a paradigm of this philosophy. Software in architecture took us away from being in the real world, with CAD and that type of stuff, right? But the ability of AI, spatially aware AI, to take us back into those spaces while still working with software is incredibly exciting. It's computation that allows us to once again get in touch with the real world. I'm also very excited by how multimodal models are developing. One example is Gemini [00:29:00] 2.5, which I find fascinating, in all honesty: the way it isn't a classic LLM, but has a lot of LLM capabilities. I don't know how it does it, but maybe it has some CLIP-style pre-processing or something to do image analysis and then create images. I do wonder whether you envisage a future where there are foundation models, or other forms of models, that are LLMs and vision models all combined, and spatially aware, and perhaps robotic as well. fei-fei: Absolutely. I think there are going to be foundation models that are more and more sophisticated. You know, my company, World Labs, is developing foundation models for spatial intelligence and 3D world generation. And if you are talking about one monolithic giant model that combines everything, that is an interesting way to think about it. I'm sure some people will try it. It is going to be very resource intensive, especially in data and compute. But that's almost like what Einstein wanted to do, to unify all the forces. [00:30:00] It almost has that flavor.
So it's a great intellectual hypothesis. But before we get to that monolithic, giant, single-brain AI, which humans do have, we are going to see more and more different foundation models across different modalities, with different focuses. hugo: Super cool. I'm a hacker at heart, so something I love about the AI space is that, at least recently, it's made us return to the Unix philosophy: modularity, composability of different models, and that type of stuff, which is super fun. I am interested: there aren't really big public conversations around spatially aware AI yet, so I'm just wondering if you could help us think through what are some of the practical implications of spatially aware systems that you think might be underappreciated? fei-fei: I think one of the most underappreciated things is that 3D is a language for computing, for programming. We are seeing a lot of pixels being generated, and they're [00:31:00] beautiful, but if you just generate pixels on a flat screen, they actually lack information. It's very hard to measure. The distance between two pixels on a flat screen is fundamentally different from the distance between two points in the 3D world, and when you put them on a flat screen, there's so little compute you can do with them, right? How do you add shadow? How do you change the camera angle? How do you do occlusion? How do you relight? How do you measure? How do you drop something in? How do you take something out? All of this becomes really difficult. So I think once we have spatial intelligence in the true 3D sense, this will change, and I'm very excited by that. hugo: Awesome. I am interested, and I know I'm not asking for any secret sauce, and I don't want to pry too much into World Labs, because I know a lot is under wraps.
But I'm wondering what you can share about your vision for large-scale world modeling and what motivated the work at World Labs? [00:32:00] fei-fei: I think what motivated me are two reasons. One is that there are just so many use cases, which we already touched on, right? From creativity, to experiences and interaction, to robotics, to education, to healthcare, to manufacturing, to agriculture. The use cases are abundant. If you look at the global market coverage of media and entertainment, of gaming, and of emerging technologies like AR, VR, XR, and robotics, that's just exciting. In the meantime, it's also intellectually and technically exciting: the world deserves world models, and 3D, spatially intelligent world models are a fundamental missing piece for the age of generative AI. I see that as a great opportunity. hugo: Fantastic. I am interested, without going beyond anything that's [00:33:00] already public, are there any use cases from World Labs that can help illustrate this direction? fei-fei: I think I did mention all of these, right? Creativity, creator space, interactive experiences, robotics: these are all use cases. hugo: Mm-hmm. Fantastic. Moving on to other areas of AI, I'm wondering what developments in AI genuinely excite you right now, things that you think are moving the field forward. fei-fei: I think open source excites me. There is a global movement now on open source, and that really fertilizes the field even further. That's one global trend I'm excited by. Another global trend I'm excited by is the opportunity to use AI to superpower scientific discovery, and that especially can and should be happening on our university campuses, because some people might have this doom-and-gloom conjecture that, in an age of AI built on large resources like chips [00:34:00] and data, higher education and universities have no role to play.
And I actually strongly disagree, because I think higher education is where truly blue-sky, curiosity-driven research continues to happen, and, more pragmatically, there's just so much interdisciplinary work happening. Whether it's clinical and medical research, or biology, or psychology, or astrophysics, or civil engineering, any discipline, any department you mention on a university campus, you realize that AI can be a tool for it. Using AI to help those disciplines do scientific discovery and innovation is a huge opportunity that I'm truly excited by. hugo: I couldn't agree more. For example, I use a lot of agentic systems to go and do simulations for me; you can send out agents to do all types of stuff. [00:35:00] Actually, I know this is something Duncan thinks about a lot, and of course works on at Delphina. So Duncan, I'm wondering if you can add any color to how you think about agentic systems going out and helping us do the work. duncan: I was actually going to ask Fei-Fei the follow-up there, which is: it's clear that academia is so long-term oriented and really invested in deep progress, and in today's world we have so much quick reaction on the Twittersphere and LinkedIn. So how do you personally draw the line, in today's world, between real progress and hype? fei-fei: Great question, Duncan. First of all, my life has always been guided by North Stars, and that does help me, because if you understand the North Star, you can measure against it: hype, or sometimes something really, incredibly real, and that's one North Star achieved. That is always my reference system. I also think [00:36:00] respect for knowledge and expertise still matters. Just because someone is on Twitter yelling at a global scale doesn't necessarily mean they have deep expertise and knowledge. So I still respect where the source of the voice comes from.
This is actually, Duncan, a very profound question. In the age of ChatGPT, in the age of agentic AI, information is everywhere. How do we teach our kids, or even the public, to decipher information, to guard against misinformation, right? We haven't talked about concerns yet. One of my biggest concerns in this age of AI is the lack of good public education. I think people, including governments themselves, for their own purposes, are [00:37:00] speaking about AI with sometimes veiled, sometimes not veiled agendas, and that has created a bit of a vacuum. Maybe that's too strong a word. Some people are trying, at least Stanford HAI has been trying, but there is really pretty much a vacuum of good, trusted, objective public education on AI. And that does concern me, because you and I and Hugo might not be as vulnerable, because we are deeply educated. We're privileged to live in regions of the world where we can access information. But that's not true for everybody, and AI exacerbates that problem, so we need to be very careful.

duncan: That's a beautiful response. And I guess that education piece feels like a really key factor in the health of the AI ecosystem going forward. As we think about our community and our society taking [00:38:00] advantage of AI, education is so important. I'm curious if you could speak more to what else the key pieces of a healthy AI ecosystem are. Open source plays a part, academia too. I was curious to open that up a little bit.

fei-fei: Yeah, Duncan, I think as the word "ecosystem" indicates, an ecosystem has to be multi-stakeholder. An ecosystem is more than win-win; it could be multiple wins in a healthy ecosystem. Using the US as an example, right? Especially in the post-World War II era.
The government has played a more or less relatively positive role in injecting resources into the ecosystem, both the public sector and the private sector, to actually turn the wheel and create a healthy ecosystem of technology innovation. That's why today, all the AI advances we see, the fundamental advances from microchips to the internet, from big [00:39:00] data to neural network algorithms, can be traced back to decades of research. So having a healthy ecosystem is so critical. It's not only about resourcing; it's also about people, right? A healthy ecosystem is where people get educated, where people get jobs, where people give back to the ecosystem. That's really critical, and I'm actually concerned about it. I've been public about this, because AI has accelerated so fast that much of the resourcing now is not only in the private ecosystem, in private companies; it is actually concentrated in a very small number of private companies. And that is not healthy for the overall ecosystem of innovation and education, and for the long-term health of society.

hugo: I love that you frame it in terms of long-term health, and I'm wondering how you feel about the long view on AI. For example, [00:40:00] we are in the early stages of AI, and one analogy I like, though it breaks down in some places, is the early stages of humans harnessing electricity. We didn't have a light bulb, we didn't have a grid, and Edison, I think, set up his innovation lab to figure out how to harness these technologies. So we are in those early stages. I am interested: when we look back on this era, do you think AI will be viewed like the internet, or computers, or the printing press, or something bigger, like fire or literacy and writing?

fei-fei: I think AI is electricity and computers combined. First of all, I think AI is the new compute. Anywhere there is a chip, whether it's a light bulb, or the engine of an airplane, or a robot, or a fridge.
Anywhere there is a chip today or tomorrow, there's compute. Anywhere there's compute, there's going to be AI. From a software point of view, AI is just a [00:41:00] more intelligent form of compute. So that to me is very clear. I call it electricity because it's very horizontal. This computing technology empowers everybody. So it is fundamental infrastructure that our society should provide, and we'll see how it plays out. It's very early. It used to be just private companies and some universities participating, but now nation-states are taking very strong steps in resourcing, and every country has its own policies. One of the biggest changes since late 2022, early 2023, is the incredible attention that AI has received in the policy world.

duncan: That's beautiful. Following up on that, what kind of guiding principles or heuristics do you think are most useful for technologists in navigating this crazy fast [00:42:00] advancement in AI?

fei-fei: Duncan, I'm going to answer your question in two ways: one for technologists, and one as a guiding principle for policymakers. For technologists, I so deeply believe that for people like us, trained as technologists, as scientists, our fundamental value is to seek truth, to follow facts. We are not truth itself. I do worry that as AI technologists, especially as people privileged to have a platform to speak, we sometimes take ourselves too seriously, as if we are truth itself. I think we have to keep that humility. We are trained to seek truth and to stay as humble as possible in the face of technology and science and humanity, and to stay true to why we started this. I started this because I was a teenager [00:43:00] curious about how the world works, how nature works, how intelligence works, and that continues to excite and humble me. I think it's important, because sometimes the hype of AI also creates unnecessary ego.
So that's for technologists ourselves. For policymakers, I have actually publicly written about this. I think there are principles we could follow to think, as well as act, thoughtfully about AI. First is science, not science fiction. I am deeply worried when policymaking, or policymakers, are motivated by hyperbolic science fiction stories or conjectures, and make policies or even laws out of that. For example, there were a lot of hyperbolic statements about AI causing human extinction a year or two ago, [00:44:00] and I have actually watched and cringed as global governments, important governments, issued policy under that premise. That is not healthy. We need to be so respectful of data, of measurements, of scientific facts, and that is very critical for AI policymaking. Second is pragmatism, not ideology. One of the biggest debates is how much we regulate upstream research versus downstream applications. I deeply believe that, by and large, we should take a pragmatic approach, look at where the rubber meets the road, and pay attention to the downstream applications. If we worry about healthcare AI devices, then we look at the current framework for medical devices under the FDA [00:45:00] and update it with new AI knowledge. Whether it's finance, environment, or transportation, we can update our regulatory frameworks with the new knowledge, but not choke off upstream innovation. This is almost to say: cars did kill people and cause injury, but imagine if, instead of introducing seat belts and speed limits, we had shut down all the car companies in the early 20th century and gone back to horses. That's just not the way society should progress. Last but not least in the framework of policymaking, which we already mentioned, is resourcing the ecosystem, invigorating the ecosystem. Not only allowing big companies to thrive; that's important for everyone.
They're large employers and big taxpayers for our governments. But also allow entrepreneurs, the public sector, and especially university research to thrive, because that's for the long-term [00:46:00] health of society.

hugo: Thank you so much, Fei-Fei, for that. That's a wonderful sense of the direction we need to head in to have a healthier AI ecosystem for all stakeholders. And I love the example of cars as well. As we know, seat belts were first developed with trials made for men, right? And then we developed systems that were better for everyone. I think that example really speaks to the need to consider, in a robust and principled manner, all the different people and all the different stakeholders in the systems we're building. And the focus on education, the developer and open-source ecosystem, but also, at a policy level, helping with public awareness around what's possible, demystifying the hype and what doesn't work. I just want to say it's time to wrap up, but thank you not only for all the work you've done and continue to do over the decades, but for your generosity and wisdom in sharing your time with us and with our audience as well. We appreciate it a lot, Fei-Fei.

fei-fei: Thank you, Hugo and Duncan, for your very thoughtful questions. I enjoyed talking to you. Thank you.

[00:47:00] Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.