Ep. 4-05 Artificial Intelligence | Everything's Political [00:00:00] Anne Jane Malabanan: I'm just worried about fraud, um, the misuse of AI can be, um, intimidating in terms of fraud. You can fake voice clips, you can fake statements in social media through, uh, actual influencers, political figures, um, those social media statements. That can be AI generated, that could be misuse of AI, fraud, um, which is really, um, it's scary to think about the fraud. [00:00:28] Anne Jane Malabanan: Um, as AI advances, maybe regulation will advance with it. Maybe the, um, consequences of misuse and fraud will advance with it as well. [00:00:45] Junius Williams: Hello, my name is Junius Williams, your host on Everything's Political. And we ask, this season, if everything's political, what do young people think? So we've got some young people here. They definitely are young. And they're going to introduce themselves. I understand they are from Technology High School in Newark, New Jersey, one of the nationally ranked high schools. [00:01:15] Junius Williams: I'm going to emphasize that. Nationally ranked high schools in the United States, by whoever does those kinds of things. So, uh, let's go to my extreme right, and tell me who you are. [00:01:27] Alex Chen: My name is Alex Chen. I'm a senior at Technology High School and I'm interested in computer science, specifically in security and privacy. [00:01:38] Anne Jane Malabanan: My name is Anne Jane Malabanan. I'm a senior at Technology High School and I'm interested in civil engineering and business. [00:01:45] Junius Williams: Very good. Very good. Uh, and I understand you went to the governor's State of the State address, Alex. [00:01:53] Alex Chen: Uh, yeah, I was invited as a guest for, uh, on the topic of AI. [00:01:59] Junius Williams: Oh, oh, on the topic of AI. [00:02:02] Junius Williams: Very good. So that's what we're going to talk about here. We're going to talk about artificial intelligence.
And he already used a set of initials that I have to get used to. AI, you know, that, that could mean anything back in my day. But AI, artificial intelligence, and that's what these young people are going to be talking about today. [00:02:26] Junius Williams: I mean, we can have war, we can have fights, we can have all kinds of stuff. But the constant thing that keeps coming up in one way or another is artificial intelligence. So, tell me, how does, uh, artificial intelligence, uh, fit into your daily life at this point? [00:02:43] Alex Chen: In our daily lives? Uh, well, it's, it's being used experimentally all around us. [00:02:50] Alex Chen: You can see in advertisements and how they try to, uh, appeal to your preferences. So they take data that you give to them through your, uh, usage, based on your phone, your watches, your searches, all that. And then they spit out advertisements based on the data they collected. So that's one way AI has been, uh, collecting and learning from different users. [00:03:16] Anne Jane Malabanan: As he said, um, AI is like generally impacting my life. I don't use AI as a tool personally, but, um, generally speaking, AI is an advancement that's, um, being progressed on, um, in a daily manner. So, generally, I'm being affected. Personally, I'm not really using AI as a tool. [00:03:37] Junius Williams: How's it being used? [00:03:40] Anne Jane Malabanan: Um, in the education academics, I've seen AI being used to, um, usually, kind of generate prompts as in literature, literacy, like prompts for essays. [00:03:54] Anne Jane Malabanan: People use these, um, ChatGPTs, uh, to create essays for their own. So in a personal use, students use those, um, websites such as ChatGPT to formulate those, um, essays for their education, which is, um, that kind of leads to a bigger, um, bigger area of AI and how it's being, um, what's that term for it? Um, [00:04:27] Junius Williams: plagiarizing.
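The targeting loop Alex describes, collecting usage signals and then ranking ads against inferred preferences, can be sketched as a toy scorer. Everything here is illustrative: the event and ad shapes are invented for this example and don't correspond to any real ad platform's API.

```python
from collections import Counter

def rank_ads(user_events, ads):
    """Score each ad by how often the user has interacted with
    its topic tags, then sort best match first."""
    interests = Counter(topic for event in user_events for topic in event["topics"])
    return sorted(ads, key=lambda ad: sum(interests[t] for t in ad["tags"]), reverse=True)

# Toy usage signals gathered from a (hypothetical) phone and watch.
events = [
    {"topics": ["sneakers", "basketball"]},
    {"topics": ["basketball"]},
]
ads = [
    {"name": "garden tools", "tags": ["gardening"]},
    {"name": "court shoes", "tags": ["sneakers", "basketball"]},
]
print(rank_ads(events, ads)[0]["name"])  # the basketball-related ad ranks first
```

Real systems use far richer models, but the shape is the same: data in, a preference estimate out, and an ad ranked against it.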
[00:04:28] Anne Jane Malabanan: Yes. Plagiarizing, um, in terms of AI, but, what's that, um, unregulation and over-reliancy. They're relying on AI, like in terms of education, they don't, um, they don't use their own knowledge in these, um, subjects like literature, and they're relying on these websites that are AI based. [00:04:56] Francesca Larson: It reminds me, um, when I have these conversations now, back when spell check first started popping up, of, well, now folks are never going to know how to spell. [00:05:07] Francesca Larson: And it's going to destroy the English language, or all language globally, because people don't know how to spell, and then they're not going to know how to write. How do you feel about AI being relied on? So it sounds like over-reliance is potentially an issue, but what about some form of reliance? And where should we be relying on AI? [00:05:32] Alex Chen: I think AI, um, will be effective in making some tasks more efficient. Like we talked about earlier, over-reliance. Um, is spellcheck, does it count as, uh, using it? Is over-relying on spellcheck really that bad? Would it reduce, uh, people's mental capacities? Will over-relying on AI, uh, make people over-reliant on using it as a tool, rather than learning and being able to formulate their own, uh, projects? [00:06:18] Junius Williams: Yeah, I've been reading about that. But, uh, I have one article here which says that, uh, the experts look at what AI has done. Helpful, yeah, helpful tool or wrench in the business model. This is a New York Times article, uh, it says ChatGPT can be an aid for creative tasks, but it can also lead to some mistakes. [00:06:51] Junius Williams: And then they go on to talk about, uh, well, on a task that required reasoning based on evidence, ChatGPT was not helpful at all. In this group, volunteers were asked to advise a corporation, this was a theoretical corporation, and uh, it uh, didn't do as well.
Unaided humans had the correct answer 85 percent of the time. [00:07:18] Junius Williams: Uh, people who used ChatGPT without training scored just over 70 percent. Uh, so does that kind of destroy the myth that, uh, AI is, uh, it's going to be the catch-all and the end-all to learning? [00:07:34] Anne Jane Malabanan: I think AI is a technological tool. And just like every other technological tool, it's used, um, to support you finish a task with, um, finish a task, uh, which would be less time consuming and it'd lead to more efficiency. [00:07:53] Anne Jane Malabanan: Um, but just like every other tool, it could be used incorrectly. That's where I go back to over, um, reliancy. Um, ChatGPT is a very broad, um, tool. It gives you the answer that you need, but when you rely on it too much, um, if this tool is used incorrectly, then that knowledge that you get from ChatGPT, it's not yours. [00:08:20] Anne Jane Malabanan: And sooner or later, you're going to depend on this website to answer all the questions for you. And you're not going to be able to think for yourself. That's when you need to learn how to use AI properly. It's a tool that you need to learn how to use properly. Like you, you can't rely on AI too much. [00:08:39] Anne Jane Malabanan: You're gonna lose knowledge at some point. [00:08:42] Junius Williams: But it's out there, and, and isn't the human, the human condition such that people are gonna definitely be over-reliant? Here's another little report here. Another little snippet from this article. In the near future, language bots, like OpenAI's ChatGPT, Meta's Llama, and Google's Gemini, are expected to take on many white-collar tasks, like copywriting, preparing legal briefs, and drafting letters of recommendation. [00:09:15] Junius Williams: The study is one of the first to show how the technology might affect real office work and office workers. Isn't that going to happen? And isn't that probably happening now?
[00:09:27] Alex Chen: On the subject of job displacement, I think AI, for some tasks, for simpler tasks, it will start replacing human workers. But at the same time, it can also open up job opportunities for other jobs that help build AI or require more human interaction than, per se, simple tasks that can be automated with AI. [00:09:57] Francesca Larson: Does that worry you? Do you, are you excited about what that looks like for the job market? You're in high school, clearly not quite stepped into the full career path yet. Does that excite you? Does it worry you about what jobs are going to be out there for not just you, but also your family and your friends? [00:10:19] Anne Jane Malabanan: I'm not really excited nor, um, kind of intimidated by this AI thing. Um, I think people kind of fear change, more importantly technological advances, only because it's, um, this sort of intellectual competition. We as humans are naturally, um, we naturally are intimidated by anything that is intellectually, um, competitive. [00:10:46] Anne Jane Malabanan: That's why people are kind of scared of AI. And I don't, I, I personally believe that AI is not something that's going to be taking over and killing off all the jobs. Um, as I've heard, uh, our governor Phil Murphy mentioned, um, in yesterday's state to state meeting. [00:11:05] Alex Chen: Uh, state of the, uh, state address. [00:11:07] Anne Jane Malabanan: Yes, uh, he addressed AI and how people fear that AI might take over and kill off the jobs, but AI will provide discovery, which will benefit medical fields. [00:11:18] Anne Jane Malabanan: It will benefit educational fields, so workforces such as that will get benefited, and it won't kill off jobs, but it will benefit jobs. [00:11:29] Junius Williams: We shouldn't be worried so much about killing the jobs, but uh, shouldn't we be concerned about cheating? And here's a, here's a, the rapid growth of artificial intelligence is testing the boundaries of copyright law. 
[00:11:44] Junius Williams: Now, if you assume that people have a right to their intellectual information, then should somebody else be able to come in and just use it and not compensate? Here's a, here's a, a lawsuit. This came out, I don't have an exact date for this article, but, uh, the, uh, other part of this little duo that I have here for you. [00:12:10] Junius Williams: Is, uh, the New York Times is suing OpenAI and Microsoft because they used the newspaper's articles to train chatbots. This was the deep intelligence that you were talking about. Somebody's got to come up with the information. Do the people get compensated for all of the information that, uh, artificial intelligence seems to be able to swallow up and disseminate so freely? [00:12:41] Alex Chen: Well, it's a, it's a very controversial topic. I think that there's arguments for both. Is the intellectual property taken from the original source? Uh, there's no accountability, uh, concrete accountability set in place for AI. Do we blame the AI itself, whoever designed and created it? Do we, uh, do we blame whoever's the one inputting the information into the robot, uh, into the AI? [00:13:13] Alex Chen: It's, for me, I think it's more on the person using the AI more so than the companies that design it, but I think it's also up for, uh, it's up to the company itself to regulate who's using their programs, so open sources, uh, can't misuse their products. [00:13:37] Junius Williams: And I appreciate your questions, and that's why lawyers will always be in business. [00:13:44] Junius Williams: Because you see, each one of those issues, I'm sure, is being raised by the lawyers for both sides in this lawsuit. Maybe they're using artificial intelligence to write their briefs. Just kidding. [00:14:00] Francesca Larson: They're, they're using it to write the things that nobody reads. [00:14:04] Junius Williams: Somebody reads them, because somebody, [00:14:06] Francesca Larson: That list that you gave earlier.
[00:14:07] Francesca Larson: It's all the documents people have to create, but nobody wants to read. [00:14:11] Junius Williams: But somebody's going to read these, because there's money involved. [00:14:15] Francesca Larson: As you're thinking about, um, the data set that's involved, so one of the things that comes to mind for me is, the New York Times does a lot of incredible reporting; over the years, they produce a lot of data. [00:14:28] Francesca Larson: Uh, when we think about even the results from ChatGPT or other services that are similar, one of my biggest concerns is the data set that it's relying on. Um, and whether it is representative of our, our culture, of our demographics, of our full history. Do you think that this is something that we need to be concerned about? [00:14:58] Francesca Larson: Are you not worried about it? Um, it's something, I'm, I'm older than y'all, clearly, a little bit. Um, and it's something that I'm worried about, but should I not be? [00:15:09] Alex Chen: Wait, can you repeat the, the question? [00:15:11] Francesca Larson: The question is, should we be worried that the data set that, um, our AI services are learning from is not inclusive enough of our, um, entire history, or all of our demographics? [00:15:28] Francesca Larson: Um, so for example, um, is an answer that I get from ChatGPT going to acknowledge, uh, slavery in the United States if it's not in textbooks, as an example? [00:15:45] Alex Chen: I think depending on where they're pulling their data sets from, uh, that's up to the company designing the AI. If there's any biases or prejudices built into it, that's where the human error aspect can come into AIs. [00:16:02] Alex Chen: It's taking data from humans and then learning from that. So, and it's been proven, I believe there was a study that showed that, uh, an AI did learn prejudice, like there was prejudice built into their answers.
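Alex's point, that a model simply absorbs whatever correlations its training data contains, can be illustrated with a deliberately tiny word-frequency scorer. Everything below is a toy: the hiring history and group tokens are invented for the example and resemble no real system.

```python
from collections import defaultdict

def train(examples):
    """For each word, count (times seen with a positive label, times seen).
    The model has no notion of fairness: it absorbs every correlation
    in the data, whether or not it is relevant to the task."""
    counts = defaultdict(lambda: [0, 0])
    for words, label in examples:
        for w in set(words):
            counts[w][1] += 1
            counts[w][0] += label
    return counts

def score(counts, words):
    """Average positive rate of the words we have seen before."""
    rates = [pos / total for pos, total in (counts[w] for w in words) if total]
    return sum(rates) / len(rates) if rates else 0.5

# Invented hiring history in which an irrelevant group token happens to
# correlate with past decisions.
history = [
    (["python", "group_a"], 1),
    (["java", "group_a"], 1),
    (["python", "group_b"], 0),
    (["java", "group_b"], 0),
]
model = train(history)
print(score(model, ["python", "group_a"]))  # 0.75
print(score(model, ["python", "group_b"]))  # 0.25, for the identical skill
```

The skewed scores come entirely from the skewed history; no one wrote prejudice into the code, which is exactly the concern being raised here.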
So it is entirely possible that, uh, and concerning, that prejudices and biases can come from AI simply because they learned it from, uh, people. [00:16:27] Francesca Larson: What about you, AJ? What are your thoughts on this? [00:16:28] Anne Jane Malabanan: Um, hearing this topic of conversation right now, like prejudice being involved in AI, that is very scary, because AI is like, it has, it strictly has no human morale, no human ethic. It's just simply computation, simply infographics. And if human bias is playing into these AIs because of the companies they're being built from, that's certainly a concern. Do we need to regulate it? [00:16:58] Francesca Larson: How do we regulate it? [00:16:59] Alex Chen: Regulation's up for debate. How much can the government regulate these companies? I believe, because we're in a democracy, our rights are pretty important. Can the government regulate how much the company can do with their technology? Since this is about data, how much of our privacy is actually being exposed to these AIs? [00:17:28] Alex Chen: How much data can AI gather from the people, and who can actually access this data? Because the AIs are being owned by companies, that means these companies also have access to your data. So how much of our privacy is being infringed on? That's a, that's of a major concern. So regulation is going to be important. [00:17:50] Alex Chen: How we're going to regulate it is still being discussed to this day, I believe. [00:17:56] Junius Williams: So would you feel more comfortable with the government regulating than with the company itself who's using the AI? I think you said earlier that the company should regulate it, but maybe there has to be something, since we are in a democracy and people elect the government. Maybe the government should regulate the content, the amount, the time,
[00:18:22] Junius Williams: account for all the prejudices. Should there be some kind of trade-off in our freedoms on that? [00:18:29] Alex Chen: It's also a matter of trust. Do we have trust in the government regulating? Who's to say that the government can't also misuse AI and their regulations? Same goes for the companies as well. So there isn't much concrete, uh, there isn't an entity for us to trust. [00:18:50] Alex Chen: As of right now. But I do believe that the technology is very helpful for the advancement of our society. [00:19:00] Junius Williams: Well, see, that's an interesting question. Who do you trust? I mean, I think the, the computation for the value of, of, uh, OpenAI, which I think is probably Microsoft, if I remember correctly, it's valued at about $80 billion. [00:19:16] Junius Williams: Now, you going to trust Microsoft to regulate an $80 billion operation? Or are you going to take a chance on the government regulating? [00:19:26] Anne Jane Malabanan: That's a really tough question. I don't think we've seen enough faults or benefit coming from either a government running this AI or the company running this AI. So I think, through this advancement, it's through time that we see who we can trust regulating this kind of advancement. [00:19:48] Anne Jane Malabanan: I have no opinion on whether or not I would prefer the government to run it or the companies to run it, because trust is like a very strong thing. So whether or not I would trust this company or this government to run it, I think I need to see more through time. [00:20:09] Francesca Larson: Would you trust a Reddit thread to run it? [00:20:11] Anne Jane Malabanan: Oh, oh God, no. God, no. Oh. [00:20:14] Junius Williams: Alright. Well, let's, [00:20:14] Francesca Larson: and, and ranking. Who would you trust more? Government? Uh, maybe a teacher, just any teacher that you happen to have right now. Uh, Reddit? [00:20:28] Alex Chen: What is Reddit?
Reddit is a, a social media platform used to share, uh, media across. There's, uh, they have different subreddits, so different communities, uh, all, uh, all accumulate based on their interest. [00:20:44] Alex Chen: Uh, in terms of asking who would I trust more, uh, that's, that's gonna definitely differ per person. Of course, I'd trust people I know, but in the grand scheme of things, I think I would trust the company more, because companies, I think their interest lies more in, you know, the technological advances themselves and, uh, making money as a whole. [00:21:11] Alex Chen: But if, on the other hand, the government is, if you're going to have to bring in politics, you're going to have to bring in the international. If the U.S. gets access to, let's say, more advanced artificial intelligence technology, that's going to bring about problems in the international community. And then, [00:21:32] Alex Chen: well, I can't speak more on that. That's a matter greater than what I can speak of. [00:21:39] Junius Williams: Isn't the government already involved with AI? Having governmental policies and practices, and maybe even governmental subsidy. Haven't they been involved with the development of AI? How do you think they got it? [00:21:58] Alex Chen: Uh, yeah, they've definitely been involved in the development of AI. [00:22:02] Alex Chen: They're pushing for it. They of course want the advantages and technology that can be used to improve our nation. [00:22:14] Anne Jane Malabanan: I think I've seen the government's support in this advancement for AI. Governor Phil Murphy, he's pushing for New Jersey to be the leader of this artificial intelligence advancement. Um, in my personal opinion, it's okay for a government to support AI, but I don't think AI as a tool should be used under the, um, the aspect of politics or government, because it, um, handles political affairs, it handles communities, it handles nations.
[00:22:52] Anne Jane Malabanan: That's all, um, representative of human morale, human ethic, and artificial intelligence should not be correlated to that in any way possible. [00:23:03] Junius Williams: But isn't it already involved? Uh, this is just an example that I read about. Uh, somebody was able to portray an opponent, uh, in a very compromising position, and they did that through AI. [00:23:20] Francesca Larson: Oh, using AI image creation. [00:23:22] Junius Williams: Image creation, or image un-creation. Look, the example that was given here was somebody was made to look as though they had no clothes on. Uh, isn't that political? Don't people use that already, those kinds of tactics, on different websites? [00:23:44] Alex Chen: Yeah, that goes back to the regulation aspects. [00:23:47] Alex Chen: Uh, AI in the wrong hands could be used to harass others. Generative AI can be used to create voice clips of people saying things they didn't say, or, um, creating photorealistic imagery of them that isn't real. So as, as AI improves, it's going to be harder to distinguish if it's real or not. But with better regulation, I believe that we could reduce the amount of people being victimized or attacked by, um, malicious actors using AI for, uh, for bad purposes. [00:24:29] Junius Williams: So we're back to regulation. Somebody's got to do it. [00:24:34] Francesca Larson: Well, it's going to be them. It's, it's not going to be us, most likely. It's going to be your generation who takes a huge step forward in how we use and regulate and deploy AI. Is there anything that you're really worried about? You mentioned privacy, but AJ, anything that you're worried about? [00:24:58] Anne Jane Malabanan: I'm just worried about fraud. Fraud. Um, the misuse of AI can be, um, intimidating in terms of fraud. You can fake voice clips, you can fake statements
in social media through, uh, actual influencers, political figures. Um, those social media statements that can be AI generated, that could be misuse of AI, fraud, um, which is [00:25:22] Anne Jane Malabanan: really, um, it's scary to think about the fraud. Um, as AI advances, maybe regulation will advance with it. Maybe the, um, consequences of misuse and fraud will advance with it as well. [00:25:38] Junius Williams: Yeah, cause here's, here's a, um, another example. Um, articles appearing with fake authors, names in bios, sports, sporting events, schools. [00:25:54] Junius Williams: My wife is a teacher, and this has to do with, I don't even know if it's considered AI, but now they can check to see, uh, professors can check to see if your thesis was in fact written by somebody else. That's pretty bad if you can get out and use, uh, somebody else's, uh, thesis and put it out there as your own. [00:26:19] Junius Williams: I think that's the kind of thing that you were talking about. [00:26:22] Anne Jane Malabanan: Plagiarism fraud is a scary thing, and if you can identify that, um, the student's work is AI, I think it's on the student if they get caught doing it, because in the first place you shouldn't be committing plagiarism with AI. You should be using your own thesis, your own statements, your own writing. [00:26:41] Anne Jane Malabanan: Um, it's on you if you get caught, and you have to face the consequences, because plagiarism is, has been a bad thing even before AI. [00:26:49] Junius Williams: Okay. I got another kind, I want to shift it a little bit, uh, from specifically on AI, but it has to do with technology. And I think this is something, uh, your teacher, Mr. Ford, and I were talking about a little earlier. [00:27:01] Junius Williams: Do you think that all this technology is taking away from young people's ability to concentrate on anything other than the most high of technology?
[00:27:18] Alex Chen: Well, I believe, uh, and studies have proven, I believe, uh, that technology has been reducing the attention spans of, uh, students and younger people in the generation. [00:27:31] Alex Chen: Um, I think specifically, like, apps like TikTok, Instagram, they pro, uh, they provide very short, uh, clips of content, which gives them entertainment. And when they try sitting down reading a book or a longer form of media, they couldn't sit there and concentrate, uh, for long enough periods of time. [00:27:54] Junius Williams: 47 seconds. [00:27:57] Junius Williams: That's what this one group came up with. Something called middle match, I believe. 47 seconds is all you get. By some measures, you're lucky these days to get 47 seconds of focused attention on a discrete task. [00:28:13] Francesca Larson: How many minutes are we into this conversation? [00:28:17] Junius Williams: This is truly exceptional. So, we certainly didn't have that problem when I was coming along, but does that bother you? [00:28:30] Junius Williams: And what do you do to stretch your imagination if you're in that particular situation? If you're the teacher, how do you get the, I want students to concentrate when they're up against such a fierce opponent as, uh, high-speed, all-encompassing technology like AI. What's Mr. Ford gonna do? [00:28:57] Mister Ford: May I make a comment? [00:29:00] Junius Williams: I don't know if you'll be heard or not, but you can make it. Go ahead. [00:29:03] Mister Ford: I could go on for a very long time about this subject. It troubles me. It is interfering with the classroom to a certain point. Uh, but being able to motivate students with interesting tasks and work, and challenging them, um, to be creative in thought, um, helps extend that time period [00:29:24] Mister Ford: of concentration. Um, I often have to tell my students it's time to leave, go to the next class, get up, close out your work and go.
My classroom was a little different from your average classroom, though. [00:29:37] Junius Williams: Okay. Francesca, how are we going to close this classroom? [00:29:42] Francesca Larson: Oh, how are we going to close this classroom? [00:29:44] Francesca Larson: Well, I have a question about creativity. And I think, AJ, you mentioned that you wanted to be a civil engineer. Yeah. And my experience with civil engineers is, one, my godsister is a civil engineer, but also I built a house. And we were in Jersey City in a floodplain, and we had to bring in a civil engineer. [00:30:09] Francesca Larson: And the thing that I noticed with that civil engineer is that there was a set script that they were able to follow, but then there was another mountain of creativity that they had to climb. So, how do you find creative spaces? How are you both finding a space outside of AI to, um, imagine the way that Mr. [00:30:31] Francesca Larson: Williams was talking about, uh, explore other aspects of your mind? Are you reading? Is there music involved? What else complements your experience with technology? [00:30:44] Anne Jane Malabanan: My experience with technology, I, again, I don't use AI as a tool personally. I just listen to music. I hyper-fixate on these projects. Uh, Mr. Ford teaches, uh, project-based learning. [00:30:56] Anne Jane Malabanan: So we often work on a lot of, um, hands-on projects, or just projects that would last for weeks. And I hyper-fixate on, um, on them. I have kind of a weak attention span, but when I hyper-fixate on these projects, I just put music on. I, um, brainstorm ideas. I create design plans. I have strategies, and whatever works best, whatever creativity I have, um, I just put it all together and create like this sort of final draft that originates from, um, this, oh, what do I call it? [00:31:34] Anne Jane Malabanan: Um, it's like multiple drafts of ideas that I work on, and if it fails, it fails. I learn from it.
Um, I have no idea if that's answering your question, but that's how I, that's how I'm understanding the question you're asking. [00:31:50] Junius Williams: But, well, it was interesting. You, you do say that you have a short attention span. [00:31:57] Junius Williams: Do you think that's because of, uh, the technical, uh, field that you're in? Or just growing up as a young person, you probably got your first phone pretty early, and you went forward from there. [00:32:13] Anne Jane Malabanan: Uh, yes, I was spoiled as a child, so I got my phone very early. And, um, yes, I fall victim to social media and high-speed, um, technology. [00:32:25] Anne Jane Malabanan: So, yes, that hindered my attention span, I can say. But, um, I'm working against it, I guess. I'm taking time to, uh, I'm taking separate time on hobbies and stuff like that, so. What kind of hobbies? Oh, um, play guitar, read, knit, just average, um, interests I catch on, yeah. [00:32:50] Junius Williams: You're trying to reclaim your humanity. [00:32:52] Junius Williams: Yes, okay. [00:32:55] Anne Jane Malabanan: Um, yes. Basically. [00:32:59] Junius Williams: How about you, Mr. Chen? Are you a victim of short-term memory, etc.? [00:33:05] Alex Chen: Uh, lately, uh, as of lately, yes, I think my attention span has decreased over the years. Uh, lately I find, I found myself not being productive at home, uh, in front of my computer. I've taken some time, some more time in the classroom, or going to a library, or going to a cafe to finish my work there. [00:33:25] Alex Chen: So, uh, yeah, uh, when I don't need to use, like, certain technologies or AI when working on a project. Um, so yeah. [00:33:37] Francesca Larson: I have one other question. What percentage of your class assignments do you think ChatGPT could do well enough to get a C? [00:33:46] Alex Chen: Well enough to get a C? [00:33:47] Francesca Larson: Yeah. Not, not, not like an A student, but, I don't know. [00:33:51] Francesca Larson: A C.
[00:33:53] Alex Chen: I think as of right now, for a high school student, if you were hypothetically to use ChatGPT, I think you'd be able to pass most of your classes. [00:34:04] Anne Jane Malabanan: I'd say a good 70 percent, but in Technology High School, we have these career-based curriculums. Um, but the course I'm taking right now is engineering. It's not something you can ChatGPT an A into. [00:34:17] Anne Jane Malabanan: It's work you have to learn yourself. And even if you cheat through it, that's not going to benefit you at all in the long run. So [00:34:29] Alex Chen: Luckily, AI can't replicate, uh, real, real activities. Uh, technology has a lot of hands-on activities, labs. So, you know, AIs can only write up some essay for you, but it can't build you this house, or do, or do this for you. [00:34:47] Alex Chen: You have to learn that yourself and do it. Do you think that's coming? Uh, I'm sure we're able to get there one day. Not, not anytime soon, though, being able to, uh, plan out their own, so let's say architecture, for example, being able to plan out their own house and then having machines build it themselves. [00:35:12] Alex Chen: I think that technology is still a long way from now. I think as we're just automating simple tasks and work that people don't want to do. [00:35:24] Junius Williams: Well, I, something that scared me along that line, uh, I think it was in the context of ChatGPT, or maybe one of the others, uh, that the, uh, machine was able to go beyond just the facts, the factual learning that it had incorporated, let's say from just gobbling up the New York Times, but it was able to do some reasoning. [00:35:51] Junius Williams: And it was able to project what this person or persons or situation would actually achieve, or how it would occur. Doesn't that worry you? That's the kind of thing we used to watch in movies and stuff like that. And, uh, oh yeah, well, you can leave and not worry about that.
But, uh, now you've got this thing that looks like that, maybe, that can, uh, really replicate and activate a Junius Williams that nobody else knows. [00:36:22] Junius Williams: Last question to both of you. [00:36:25] Alex Chen: Uh, I do think the former sci-fi films depicting, oh, AI's gonna take over the world, is like a little bit exaggerated. I do think that, uh, people, our bright minds and future bright minds, would be able to, uh, control and regulate what AI will be able to do. Uh, it, it shouldn't be able to get to the point where we'd have to think of AI as a true threat to humanity as a whole. [00:36:56] Anne Jane Malabanan: Um, I'm circling back to when I mentioned that, um, humans are naturally intimidated by, um, intellectual competition. Um, the intimidation coming from AI, and how quick the generation of intelligence can be, and how we're intimidated by it. I think, due to that intimidation, we underestimate our own intelligence. [00:37:19] Anne Jane Malabanan: And it'll be a long way from here, if even possible, that AI would take over. [00:37:26] Junius Williams: Well, I want to close by thanking both of you. And just to say that, uh, maybe it won't happen while I'm still here. Because I'm in the fourth quarter. But, uh, it would be a shame if we have to use our intelligence to fight the machine that our intelligence created. [00:37:44] Junius Williams: Thank you. [00:37:45] Francesca Larson: Thank you. [00:37:46] Junius Williams: And I really enjoyed your conversation with us. You guys are on the money. Thank you. [00:37:51] Alex Chen: Thank you. [00:37:54] Junius Williams: Everything's Political podcast is sponsored by the Center for Education and Juvenile Justice, and supported by the Terrell Foundation and the Robert Wood Johnson Foundation and listeners like you. [00:38:08] Junius Williams: It is produced by Mosaic Strategies and Dream Play Media, with theme music by Anthony "Ant" Jackson.
If you like this episode, please subscribe to the Everything's Political podcast on YouTube (press the red button), Spotify, or wherever you get your podcasts. And if you can connect with us on Facebook and Instagram, do so. [00:38:33] Junius Williams: See you next time. And remember, stay political.