Tech Transforms Podcast
Guest: Janet Kang, Just Horizons Alliance
Host: Carolyn Ford

Carolyn Ford: Welcome to Tech Transforms. I'm your host, Carolyn Ford, and today I am very excited to welcome our guest, Janet Kang. She's a former tech entrepreneur who's now steering the ship in the nonprofit world, helping organizations rethink how AI is built, deployed, and led. From founding startups in Silicon Valley to being a champion of ethics in emerging tech, Janet's journey is anything but conventional. This episode includes a healthy dose of inspiration, some bold career pivots, and a vision for the future where innovation actually puts people first. Janet, welcome to Tech Transforms.

Janet Kang: Thank you for having me.

Carolyn Ford: We were talking a little bit before I hit record and I'm so excited to dig in here. I want to start with your origin story. You've navigated the startup world, you've founded ventures, and now you lead in the nonprofit space, all while pushing for a more inclusive and ethical tech industry. So what's driven the evolution of your career?

Janet Kang: Yeah, it's been quite the journey. And you know, as much as I hate to admit it, I've been an entrepreneur all my life. In fact, my father was also an entrepreneur, and that's actually what got me to really not want to be an entrepreneur: the ups and downs. But I started my first company when I was 13 years old. And then after college, I went on to start two additional companies, largely in the education technology space. Most recently, I was at a corporate venture capital firm out of Silicon Valley, where we incubated new products, services, and companies every six to twelve weeks. At that pace, and with the variety of incubations we were running, we were one of the earlier teams using AI not just for prototyping, but for actual product development. I had the privilege of seeing, really firsthand, what AI deployment looked like five, six, seven years ago.
It was very raw and honestly, it still is. But because it was so productive, so functional, somewhat magical, there was a lot of pressure to continue to use AI. Because it was fast, because it was high performance, it allowed us to scale very quickly. And I often felt that, wow, we're really taking a big risk here. There were many times I pushed back to make sure we weren't using AI for the portions of the product that were higher risk, higher consequence.

Carolyn Ford: So I want to pause right there. You said, "We're really taking a big risk." What made you feel that? Like, what did your gut say that made you think, "We need to slow down"?

Janet Kang: Some of our initial testing. When you pressure-test this thing, it behaves in very different ways. You'd even hire a consultant to do some initial testing, and it would be different when I put in a different profile. This was a little earlier in the day (some of that is much better now across the industry with the larger models), but it also required creative thinking. You have to think: Let me imagine very different user journeys, not just your average use. What do the failure modes look like? Let's run more adversarial prompts. And I don't imagine everyone thinks the way I do. I want to hope every product owner, every product lead thinks this way, but that's where I saw the risk. It's one thing to do this for a use case that is for working adults, for example, where you have a certain level of maturity, a certain level of decision-making. You can see right from wrong pretty easily. When that age group goes down, that risk goes way up, right? The conversations that younger users can have with AI, the interactions you're exposing them to. I was fortunate that most of the use cases I worked on were with adult users and were more in the entertainment space. But you could very easily see the risk, and how we really didn't have the right tools to manage it.
You'd look all around you and everyone's using AI and you're going, Wait, am I the only one seeing this? That's a big part of why I made the hard pivot in my career. What I wanted to see in the world is a counterbalance. Because I am very pro-AI. I really think this is transformational. I've seen it being used. I use it every day. But where is the counterbalance to all of that, where we're working on the infrastructure around responsibility and accountability? If we're investing so much into powering AI development, there have to be some solutions around keeping this safe, keeping it ethical, especially for more vulnerable populations and use cases. So perhaps that's part of my origin story to get to where I am now: being able to see it from the builder side of things, from the tech side of things, and then understanding that there really isn't a solution to keep this safe in a way that's actionable today.

Carolyn Ford: Right. It's so huge, and the bell has been rung. How do you contain that? So what are you doing right now? Tell me about your nonprofit.

Janet Kang: Thank you for the question. I run a nonprofit called Just Horizons Alliance, and we're working on open protocols and frameworks for ethical AI development, and also in the early stages of conceptualizing actual technical tools you can use to see and assess risks in real time when it comes to AI-powered products. We're in the early stages. I started this about five months ago, but I'm really encouraged. A) Most people in the industry understand the problem. They agree that we need a solution. B) There are a lot of smart people that are just itching to get together to build a solution. So I'm right at that cusp where we're starting to build out the initial framework, to shop that around with some experts before we publish it to the wider world, and working with some charter partners to start auditing AI products for various ethical standards.
More to come on that, but I'm hoping that this time next year we'll have some really exciting solutions out in the market that can help people make better decisions around AI development.

Carolyn Ford: Why nonprofit? Is it that you just want what you develop to be open and available to everyone? Why the nonprofit angle?

Janet Kang: Yeah, it's a great question. A lot of people ask me that because, in a way, you could say it's easier to go the for-profit route, and that's the world I'm most familiar with. There's nothing wrong with that. But the late Charlie Munger said, and I really love this quote, "Show me the incentives, and I'll show you the outcome." In the for-profit world, the name of the game is growth. You have a fiduciary duty to your shareholders to get the numbers to go up. This work is very different. It requires a lot of patience, but most importantly, it's an intense collaboration and partnership with multiple stakeholders: other people working on the same solution, partners that are impacted by this, users that want this. It requires a level of collaboration where I want the center of gravity to stay on impact, to stay on the outcome we want to see in the world. So, I might be wrong, but that's where I had conviction that this really needed to be a nonprofit.

Carolyn Ford: You are a woman in tech, and let's face it, it's still male-dominated. Has that been a challenge for you? Have you felt like you've been in the middle of systems not designed for you? And how have you navigated that? I mean, clearly you've navigated it well. But how has that shown up for you in your career?

Janet Kang: Yeah, and I'm sure you have as well, being a female executive at a tech company. I think the experience is different for different folks. I was at a panel and someone said something really cool. She said, "If there's no space for you, no empty seat for you at the table, then bring a folding chair." I thought it was hilarious.
You just walk around with a folding chair wherever you go. It's a great analogy. The problem I saw was that oftentimes there wasn't even space to squeeze in a chair. So the challenge for me was: then you have to carve out a new role, set different expectations, create a different opportunity, to lead them into a bigger room where there is space for more people. I embraced that challenge. I really liked it. I come from a background where I had to move around a lot, and constant change was always exciting for me. And I really didn't do this alone. I had a lot of incredible people around me that, to continue that analogy, would say, "Actually, take my seat. I can stand." Having those kinds of mentors, sponsors, friends really mattered. I try to pay it forward and I continue to seek those people out because it is all about those relationships, helping each other. Yes, the tech world is notoriously bad in terms of diversity, not just gender, but all forms of diversity. But I think it's getting better. The numbers go up slightly every year. To me, it's all about embracing the challenge and, when you do get that seat at the table, being able to pay that forward.

Carolyn Ford: I love that you said you've had mentors and sponsors. The truth is, I have found those throughout my career. Man or woman, there are plenty of people that want to see you succeed and that say, "Here, take my seat," and see the value in diversity and do make room for us. And it's still up to us to... As we were talking, I just thought about how passionate you are about this. That is palpable. And when you have a passion like that, it's kind of unstoppable. What advice do you have for leaders, especially women, who are just entering this space?

Janet Kang: I feel like I'm still becoming. I'm still learning, I'm still growing, so I don't know if I have very good advice.
But the biggest shift for me in the last five years or so, that's really helped me in deciding where I go next, is shifting my time horizon to be much longer. Looking at the next decade, maybe even the decade after that. Selecting roles based on "What's the biggest, best, fastest thing I can build?" is what I used to do earlier in my career. Now it's more: "What will I build that will stand the test of time? What will I build that I can really stand behind?" I'm a mother of two young children. They say when you become a parent it really changes you. It really, really did for me. I live by that cliché. I imagine the world I want to help create for when my kids are older, and that allows me to imagine a much longer horizon in making decisions. It's also how I treat others and how I mentor them. I try to look at the long game. It's still hard because we all have near-term goals we have to hit, but I make that extra effort to look ahead.

Carolyn Ford: Looking at the long game: I think I, too, needed to become a parent to even think about it. I didn't think about the long game until I became a parent. And it is such a mind shift.

Janet Kang: They ask you such good questions. Ever since my older girl was two years old, ever since she could really speak and understand, if you asked her, "What does your mother do for a living?" she would say, "My mama builds companies." Because I was a venture builder; I built companies. That was more my job than just the one thing I built. I could see that she would then ask, "Well, what kind of companies?" And I wanted to be able to answer that question with conviction. So I think that slowly shifted my thinking into, What am I building? Am I being part of a future I'd like to see her thrive in?

Carolyn Ford: I love that. You're building the future. You're building a beautiful future for your children and for all of our children. What shapes your perspective on AI, whether it's personal experience, research, something else?
And what was your personal "aha" moment around AI ethics? You talked about it a little bit in your last company, but when did you say, "Right, I'm done here. I'm going to do this instead"?

Janet Kang: In terms of my perspective on AI, it really comes from two core roles I have. One is as a builder. I like building solutions. I like solving problems, and I've built a lot of solutions using technology. I've seen the early stages of AI deployment and how that's evolved over time. So one perspective I hold is understanding what the real world is like outside of regulation, outside of thought leadership: what builders have access to today in terms of standards, protocols, and tools to keep AI safe. The other side goes back to my role as a parent, my ability and my curiosity around imagining longer time horizons: what this will become 5–10 years from now. It's really a combination of the two: understanding the real-world context and, if we go at this pace, what the world will become. I don't want to bring too much fear into the conversation because I see this as a challenge, not just something to be scared of. It's a challenge we can solve if we get the right minds together. But we need the counterbalance. If everything is going so fast and AI is so powerful and capable of all these magical things, where is the counterbalance to make it safe? Where is that infrastructure around responsibility and accountability, not just thought leadership, but actual tools to make it possible? So those two things drive my perspective. As for the "aha" moment, for me it's less of a single moment and more of a slow creep, an accumulation of thoughts, things I've seen, read, and heard from people experiencing harm. Everyone talks about the threat around AI and superintelligence as this crazy future, the doom and gloom of robots flying around and taking over humanity. That's very warranted; we need to think critically about that, because it is a real threat.
But in parallel, we have another, equally scary threat that's already unfolding. It's not crazy robots, but how AI is penetrating, without safeguards, into the infrastructure of our day-to-day: into our healthcare system, education system, how we hire people, into HR systems. Government agencies are using AI that doesn't have a lot of safeguards in place. Perhaps they rely on off-the-shelf filters, which I know from experience don't always work. What we read online in public reports of harm is really just the tip of the iceberg. The bigger risk underneath continues to grow, and not enough people are talking about it. Right before you hit record, we talked about how the way we use AI changes the way we communicate with other human beings. People become very dependent on whatever form of AI they use.

Carolyn Ford: Yes, and I'm going to interrupt here. You said something that I hadn't thought about: that AI has perhaps put a filter on what we think is "good." I said, "I don't know if I can write anymore. And when I do run it through AI, I feel like it's better, but it's sort of akin to the supermodel on the magazine cover." We've got this perception of what "good" is because it's what we see all the time now because of AI.

Janet Kang: Exactly. You can no longer look at a piece of writing or a work of art and say with confidence, at least I can't, that it was created by a human and not AI.

Carolyn Ford: Yes. AI stole the em dash from me. That was my go-to tool. If I ever use it now, I'm like, "Oh, I can't use the em dash." I always used the em dash. It was one of my favorite things, but now I steer away from it because it's such a tell.

Janet Kang: I know, that's become a giveaway. Not only that, but you start to see the writing styles of the different AI models. You kind of know who's had help from Claude, versus Gemini, versus ChatGPT, right?

Carolyn Ford: Yes, ChatGPT, yes.

Janet Kang: Em dashes are more ChatGPT than Gemini or Claude, just as an example.
To me, it's changing how we write, how we perceive information, how we evaluate information. It's changing how we learn. Those are, to me, the lower risks. The bigger, higher-order risks are the ones you don't really know about. Who makes welfare decisions? You can say, "Oh, it's ultimately made by a human," but the filters down below are AI-powered. So who gets left out? Who gets included? That's already happening now. I'm alarmed that we're rushing to deploy AI when we don't have the right evaluation tools around what it's doing to society. So, yes, I see those two parallel threats as real, not something to be scared of, but a challenge. I want people to think about it and then roll up their sleeves and ask, "OK, so what can I do next?" So, not an "aha" moment, but an accumulation of what I've seen that led me to say, "I think enough is enough. I'm ready to jump in and have this be the next stage of my career."

Carolyn Ford: I think that's true for most things; it's a slow creep, to quote you. You said the risk goes up as you get into younger populations. I'm guessing that's because those of us who have had more experience have developed critical thinking on our own. The younger you are, the more those muscles are still being developed. So the younger population, that maybe hasn't learned critical thinking as much, they're becoming... like AI is sort of a drug. Is that a good way to phrase it, or what am I missing? Why does the risk go up?

Janet Kang: I think you're right. The maturity level, our development of thought and critical thinking, is so important when you interact with AI, especially when it starts to creep outside its context window, which we see happening a lot. That's where you see public reports of real implications for younger users. We've also seen so much of this with social media. You could say it's related; the algorithms are now AI-powered.
Deepfakes, for example: how they're exacerbating bullying in schools and in younger populations, and older ones too, but that vulnerability is there. As an adult, I call it the "escape button." We know our escape button. We know when we're doom-scrolling on social media, going through TikTok for 30 minutes, and then it's 45 minutes. Adults have a lower threshold for that escape button; maybe it's life experience, maturity, all of that. Younger users don't. They are lost in that fog, the AI fog of that perpetual cycle. That's where I think there's not enough consumer awareness, for parents, to really understand what they're putting in the hands of their children. And the crazy thing is, we all read that the tech billionaires working on these AI models don't let their kids use them. Mark Zuckerberg doesn't let his kids use Facebook or social media, I'm pretty sure. So yeah, we have to be especially careful with the younger population. That's likely where our go-to-market is for the solution we're building, because if things go wrong there, it's really bad. We all know the problem exists. There are really no solutions around it today. When I speak to prospective partners, they get very excited and say, "Do you have a solution tomorrow?" We have big school districts saying, "We want to promote AI use in all of our schools because we don't want to be left behind. But where do we start? What tools do we recommend? What do we tell our kids' parents? What do we tell our teachers?" There is really no answer to the "OK, now what?" And that's what I've been very fixated on.

Carolyn Ford: A few days ago, a petition was signed by, I think, 850 people, from the "godfather of AI" Geoffrey Hinton to Steve Wozniak, celebrities like Joseph Gordon-Levitt, even Steve Bannon. This petition is about putting a halt to superintelligent AI development, which is fascinating to me, that anyone thinks that's even a possibility. It shines a light, for sure.
You wrote a post on LinkedIn. Listeners, go find Janet on LinkedIn and read this post, but I want us to break it down here: what the path to AI accountability is. From what I could tell, the petition basically says, "Halt the path toward AI superintelligence." Talk about how you feel about that petition.

Janet Kang: Yeah, absolutely. It says, "Pause superintelligence development until such time as there is shared agreement around principles...," yadda yadda yadda. I want to make it clear: it's not their job. This is where the incentives are just not there. Very important people, and I'm glad this is bipartisan, everyone agrees, say that unregulated, off-the-rails superintelligent AI is not something we are ready for. No one is disagreeing with that, except for the people building it. I think it's great that this is happening. I'd just be disappointed if it stopped there. You sign a piece of paper, you get all these celebrity names, and perhaps it helps with regulation. Maybe this inspires policymakers to take this more seriously. I think it will serve that purpose, and I hope it does. But to ask a frontier model company like OpenAI to really pause their main goal is like asking a game company to make its games less fun. It's not their job. They have a fiduciary duty to their shareholders, and the name of the game is growth. What I think should happen, in addition to petitions like these, which I support, is solutions around that. I don't think there's a single solution to this massive problem that's shaking up our civilization. It should be multi-pronged. It could include things like open protocols, standardized frameworks around how we talk about ethics in AI. When I say ethics, I include things like compliance and security. There's a lot of that with AI; that's the legal floor, absolutely needed. But I want to work toward the ethical ceiling, because AI is not static. It evolves with the conversation.
You have to test it aggressively, not just in your average usage, but in your failure modes. Not just structured prompts, but adversarial prompts. Imagine multiple different user personas and test messy, real-life, human-centric usage. I think the solution includes open protocols and frameworks, audits by independent entities for existing products, especially for vulnerable populations, and eventually an actual tool that allows you to see and assess risks in real time. Just because you passed an audit doesn't mean you're safe forever. AI stacks are not one model; they're multiple models together, and you update them frequently. So having a real-time tool is something we need as an industry. I wrote a bit about that on LinkedIn. My mantra is: thought leadership is great, but it's very passive. The active thing is answering the "So what?" Where are the tools? Where are the real frameworks? Having been a builder myself, I'll tell you: there are very few options to make AI safe. You can take off-the-shelf safety filters, bias detection layers, and whatnot. They do part of the job. But unless you customize them for your use case and continue to simulate different uses, especially vulnerable use cases, it's still not enough. And that's just for the percentage of applications that even use those filters. I imagine many don't. So I'll get off my soapbox, but yes: more action, less talking. That's what I'm here for.

Carolyn Ford: What groups are you working with mostly right now?

Janet Kang: Right now, I'm mostly working with companies and products, because I want to make sure that if I were to audit or build a framework, it would be used in the building process. I think consumers are either really scared, skeptical, or fully adopting AI. I want to increase awareness around the risk of using AI products, especially for younger populations. But there's already a lot of work going on in that space; there's a lot of AI ethics advocacy, especially in nonprofits.
My mantra has always been: let's solve the AI ethics problem for builders so they actually have good options. If you want to do the right thing, you have several options to implement right away that are accessible and transparent. These options are independent and not tied to some corporate gain. That's where I've chosen to build solutions.

Carolyn Ford: When you say you are going to put a framework in place, give me an example of what that would look like for my company.

Janet Kang: Absolutely. A framework would really be the starting point: best practices. We're in the process of evaluating different AI models for various use cases and how they test across different ethical dimensions. Some of this data is already available. Stanford, for example, does a lot of safety- and compliance-related leaderboards for different models; they give you part of the answer. We'd expand on that to understand the classifiers you're using and how they relate to your specific use cases. So there would be a starting framework. What you're talking about, poking holes and testing, that's more part of an audit process. Audits look very different for different organizations. For an education technology software company, say an after-school AI tutoring application, you know the use case and user personas. You would then stress-test the system. Some of that testing is automated; some requires human-in-the-loop testing to be able to test failure modes and edge cases: how much goes wrong, how big the risks are, how much it goes outside its context window. For a company like yours, an enterprise corporation with lots of knowledge workers and specialists, that would require an assessment of the different ways your employees are using AI today, how they hope to use AI, how much customer data you retain, and how much of that is required in the work you do. That's more of an analysis across the board, and then there would likely be a few different kinds of audits in place.
But from an ethical standpoint, it's about which parts of decision-making, which high-consequence scenarios, are using which kinds of models, to be able to produce a report, almost like a heat map of where your biggest risks are, and some recommendations to mitigate those risks, right? What I don't love about the world right now is that, because we have so much AI already implemented, we're doing a lot of this after the fact. It's like building a whole bridge and then saying, "Let me test it for safety."

Carolyn Ford: That's where I immediately took you, right? And you're like, well, wait, wait, before you even implement, you need to build this framework to implement with. Your best practices?

Janet Kang: And in an ideal world, before you even implement, you'd build the framework into your implementation. That's where best practices come in. So yes, in an ideal world, we build those things first. But in the real world, more people are candidates for audits than for using evaluated, pre-designed models. We're trying to do both. I have to prioritize, but that's the reality. What I urge companies to think about is the desire to be audited, the desire to know where your risks are. Not just for ethics; it's similar to what happened with climate and sustainability a decade ago, when people did sustainability as lip service, part of PR. Now that there's real regulation, it's permission to play. You have to make sure your whole supply chain is audited. I think that will happen. What I'm looking for is early adopters to put this in the boardroom conversation, to make it a priority in their strategic plans, because you want to look ahead. From a commercial standpoint, it's also how you protect your brand.

Carolyn Ford: All right, let's flash forward 10 years. If everything goes right, ethical AI, inclusive innovation, what does that world look like for you?
Janet Kang: I think that world, just to put a meme on it, is one where AI has less "main character energy" and becomes more part of the infrastructure. Right now, we talk about AI like it's the main character; it's what the story revolves around. If we do this right, AI becomes part of the infrastructure. If I flip a light switch, I don't think about all the electrical circuits behind it. Right now, we're probably building electrical wires that go through all these rooms without any circuit breakers. I'd like us to get to a place where AI is simply infrastructure and we're talking about the innovations we're building on top of it. Because it's standard protocol to have safeguards in place for your use case, much of that worry is mitigated, and you're actively thinking about innovating. People are innovating aggressively in this space, it's not that it isn't happening, but we probably need to scale back a little in the short term to get to that bigger goal.

Carolyn Ford: I like that analogy of the electrical wiring. We have safety protocols in place so we don't burn ourselves down. It took a while to get there, and now we can flip the switch and I've got the light. We need to do the same thing with AI.

Janet Kang: It's like cybersecurity in the '90s too. Now we have all these standard protocols, but back then you didn't have penetration tests and red-teaming as standard practice. Only when you had a big hacking incident or breach were you struggling to figure things out. We've come a long way since then. I think AI, especially ethics around AI, really needs to catch up.

Carolyn Ford: What's one measurable outcome that you've seen in your work so far? I know you're only five months in, but you've been doing this work for a while. What's a measurable impact you've seen from implementing ethical frameworks or purpose-driven strategies?
Janet Kang: I have to admit, given that this is still the Wild Wild West of AI and ethics, I don't have a clear, tangible metric to share that would really make the case today. I can draw analogies to what's happened in cybersecurity, and there are really great products being built with these principles in mind that I can't wait to tell the world more about. It just hasn't been done in a structured way where they've been assessed, and we don't yet have the vocabulary to talk about this in a way that goes beyond meeting baseline data security protocols. So more to come on that, Carolyn.

Carolyn Ford: All right, let's look at the lessons we've learned in cybersecurity and electrical wiring and apply them to AI. Let's move to our Tech Talk questions. These are just fun, gut-reaction questions. If you could beam one piece of today's tech back to the 1970s, what would you choose?

Janet Kang: A Roomba.

Carolyn Ford: Really?

Janet Kang: Because I imagine in the '70s and '80s, when you're thinking about futuristic models, you're thinking flying cars and Terminator robots. But it's this peaceful, quiet, little friendly thing that moves around and cleans your house. That's me manifesting the hope that these amazing innovations can help our day-to-day in positive ways that are just part of your norm. They don't have to be this hard pivot in the way we work and live.

Carolyn Ford: Yes. What is the goal for AI? Is it AI for AI's sake, or is it so I don't have to vacuum my house anymore?

Janet Kang: Exactly. I'm still waiting on all the promises: robots that clean my house and fold my laundry. We're still waiting.

Carolyn Ford: I can't wait for that one. What's one AI myth that you wish would just disappear?

Janet Kang: That AI harm and risk is only in the future. We're living in the thick of it now.

Carolyn Ford: We are seeing it now. What are you reading or watching right now that's shaping how you think about AI, tech, ethics, the future?
Janet Kang: I keep up with the markets and tech updates daily; that's just something I've always done, being in this space. But books: I'm a slow reader, but I just finished Empire of AI by Karen Hao, which is a great read. And I'm starting The Alignment Problem by Brian Christian. I know I'm late to that book; everyone's been saying, "Oh, you must have read this." So I urge everyone to read The Alignment Problem. I hear it's great and I can't wait to get into it.

Carolyn Ford: OK, I love a good reading list. I've got to give you one. I read these books by N.K. Jemisin; she wrote the Broken Earth series. There are three of them. You made me think about them early on when we started talking, because you said you started thinking about things like the long game. These books think about things in millennia. I don't know if you're a fiction reader, but they're beautifully written and really talk about the long game.

Janet Kang: I love that. I am a fiction reader. OK, I've got Broken Earth on my list.

Carolyn Ford: So, for anyone looking to lead more ethically in tech and AI, or even make the leap from profit to purpose, what's one step they can take today to start?

Janet Kang: You have to do the work. There's no secret pathway; you actually have to sit down and do the work. If you're a product builder and you want an ethical path, go re-imagine your user journey. Think of all the failure modes. Be honest with yourself: Are you really using the right models? Do you really have the right safeguards in place? Do you really have the right protocols? Stepping into nonprofit, mission-driven work is not for everybody. I feel it's a great privilege to have the time to reflect and think about this next chapter. But if you're given the opportunity to do it, take it. You live this beautiful life. I'm always about the opportunity that excites you.
It's not just comparing two things, which one is better; it's the long-horizon game: if you see this role evolving in 5–10 years, how does it create the future you want to be a part of?

Carolyn Ford: So when the opportunity comes to you, say yes.

Janet Kang: Just say yes.

Carolyn Ford: Just say yes. Where can listeners find you or learn more about your work?

Janet Kang: Our website is justhorizons.org, and you can find me on LinkedIn. Please connect with me, follow me. I hope to be publishing more of our work, but I'm also spending a lot of time doing the work, so I hope you'll join me in this action-oriented journey around AI.

Carolyn Ford: When I asked you what people can do as a first step, I thought: "Follow Janet." Because you are doing the work, you're up on tech, and you're posting about it on LinkedIn. It's a really quick way, like, that's how I found out about this petition that was just signed four days ago, right? It was October 25th or something.

Janet Kang: Yes. When the AI Action Plan came out, it was alarming to me that no one had posted about it on LinkedIn. I have a pretty broad network and I thought, "Why isn't anyone talking about this?" We have so many distractions going on. So, yes, please follow me. And just keep up with everything. It's hard; things move fast. But I think this one is important. We can't wait on this.

Carolyn Ford: I agree. Well, thank you so much for your time today, Janet.

Janet Kang: Thank you for having me. This was fun.