The past, present and future of AI in education

On UCI Podcast, Shayan Doroudi and Nia Nixon share expertise on tech evolutions in teaching and learning
Cara Capuano | Mon Apr 22, 2024

ChatGPT, a popular artificial intelligence tool that can generate written materials in response to prompts, reached 100 million monthly active users in January 2023, just two months after its launch. That unprecedented growth made OpenAI’s creation the fastest-growing consumer application in history, according to a report from UBS based on analytics from Similarweb.

Responses from the world of education to an AI tool that could write essays for students were immediate and mixed, with some teachers concerned about student cheating and plagiarism and others celebrating its potential to generate new ideas for instruction and lesson planning.

Various researchers in UC Irvine’s School of Education are studying the wide range of technological innovations available to enhance education, including assistant professors Shayan Doroudi and Nia Nixon. One facet of Doroudi’s research focuses on how different technologies improve learning. A segment of Nixon’s work centers on developing AI-based interventions to promote inclusivity in team problem-solving environments.

How are artificial intelligence tools currently affecting teaching and learning? What are some of the most promising applications that have been developed so far? How are AI tools being used to personalize learning experiences – and what are the benefits and drawbacks of that approach? What’s next? These are some of the questions Nixon and Doroudi address in this episode of the UCI Podcast.

The music for this episode, titled “Computer Bounce,” was provided by Geographer via the audio library in YouTube Studio.

To get the latest episodes of the UCI Podcast delivered automatically, subscribe at:

Apple Podcasts · Spotify

Cara Capuano / The UCI Podcast:

From the University of California, Irvine, I’m Cara Capuano. Thank you for listening to The UCI Podcast. Today’s episode focuses on artificial intelligence and education, and we have a pair of guests willing to share their wisdom on this ever-changing topic. They’re from UC Irvine’s School of Education – Shayan Doroudi and Nia Nixon – both assistant professors.

Professor Doroudi runs the Mathe Lab. Mathe is a Greek word for “learn,” but it’s also an acronym for Models, Analytics, Technologies, and Histories of Education. The lab has a particular focus on the multifaceted relationship between AI and education, including the historical connections between the two fields, which span more than 50 years.

Nixon heads the Language and Learning Analytics Lab – or LaLa Lab – which explores the intersections of technology with learning and education, with a particular focus on learning analytics, AI and collaborative engagement. Thank you both for joining us today.

Nia Nixon:

Thank you for having us.

Shayan Doroudi:

Yeah, thank you for having us.

Capuano:

Let’s start our conversation with what makes you tick. What first drew your attention to AI and education?

Nixon:

So, for me, I was an undergrad at the University of Memphis, and I was exploring different research labs. So, I tried cognitive psychology and clinical psychology and then I got into what was called the Affective Computing Lab. And so, in that lab we did a lot of analysis and assessment of students’ emotions while they were learning. So, we would track their pupil movements, postural shifts, language, while they were engaging with intelligent tutoring systems. It was inherently a very AI-focused lab and that sort of birthed my interest in the field and all of its possibilities.

Capuano:

And what about you, Professor Doroudi?

Doroudi:

Yeah, so, I didn’t start quite as early in my undergraduate career, but while I was an undergraduate student, I took a class online. It was a MOOC – a massive open online course. There was one course about artificial intelligence, and it drew something like 160,000 students. I was one of those many, many students. I liked the content of the course, but I also liked how the course was being delivered and the educational experience. I think that sort of seeded my interest, in some sense, in both AI and education.

I did an internship at Udacity, which was a company that put out that course. And at some point in that internship, I said, “I think I want to do this for my Ph.D. I want to study how to improve education with tools like AI and machine learning.” And so, that sort of started my experience.

And I didn’t know about intelligent tutoring systems – which Nia referred to – but when I actually started my Ph.D. at Carnegie Mellon University, I realized, “Oh, people have been working on this for decades.” And then I learned about intelligent tutoring systems and started working on them for my Ph.D. as well.

Capuano:

It’s nice for me to hear that you had “discovery moments” with the tools because they are ever-changing and, in the grand scheme of life, they’re still fairly new. So, it’s good to hear from who I see as seasoned experts in the field that you also had that new “ah ha!” moment and came to AI through kind of a genuine experience.

How are AI tools currently impacting teaching and learning, and what are some of the most promising applications that you’ve seen?

Doroudi:

It’s interesting. If you had asked me this like two years ago, I would’ve talked about certain tools, but I think probably most listeners are aware that things have changed a lot over the past year with ChatGPT and generative artificial intelligence. Now, there are so many new tools popping up, so many new ways that people are trying to use it. And one hope I have is that people don’t forget that researchers were actually working on this before ChatGPT.

There are lots of things that we mentioned – intelligent tutoring systems, for example. These are platforms that help students learn in an individualized or personalized way and guide them through problem solving. So, there are the more traditional ones of those, and now, with ChatGPT, people are trying to create chatbots that can help tutor students. And I think we’ll get to this a little bit later – there are pros and cons of the different approaches, and there are things to watch out for. But yeah, I think there are a lot of interesting tools being developed currently.

Nixon:

I completely agree with Shayan. If you walk away with anything from this conversation, it’s that this isn’t a new field. Decades of research have gone into using AI in educational contexts. And a lot of that work falls into three super broad categories: assessment – using AI to assess learning in different ways; personalization – intelligent tutoring systems are a great example of that; and content delivery. But that’s definitely been incredibly transformed in the last two years by all of the things that he was just discussing.

One of the most promising things? That’s a huge question, and it’s really hard for me to even begin to answer because I also know that this is being recorded. So, I think what I think is the most promising thing in this moment today versus tomorrow will probably be different.

But I will say that I think the conversational aspects of these newer models – and the social aspects in the context of education – are huge. And what we can do with that – the human-like engagement that we can do – it opens the door for a host of different possibilities.

Capuano:

Professor Nixon, you just talked about the personalization aspect, one of the ways that AI tools are being used. How do they personalize learning experiences for students? How can they do that?

Nixon:

Right, great question. Historically, we’ve been able to sort of map out a student’s knowledge of a particular subject and then provide them with – or expose them to – different levels of difficulty in content as they navigate through any educational platform. So that means if you’re a novice, I might unfold things in a personalized way so as not to overwhelm you and not have you disengage or become frustrated.

Another way is dealing with emotion. So, as I mentioned earlier, I started out in an affective computing lab, and one of the huge things that came out of that is that emotions are important for learning – which is odd that that’s a relatively new idea – but when you’re confused or frustrated, you’re more likely to disengage than when you’re in flow and everything disappears and everything is at the right level for you.

So, AI can be used to go, “Hey, I think you look a little confused. Let me give you a hint. Oh, it looks like you might have a misconception. Let me correct that for you.” So, you don’t slip into these unproductive states of learning – affective states of learning. So, those are two examples. There are tons more of how AI can be used to kind of personalize the learning journey for students.
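To make the mechanics concrete, here is a minimal sketch of the kind of adaptive sequencing Nixon describes, using a simplified Bayesian Knowledge Tracing update – a standard technique in intelligent tutoring systems. The parameter values and item list are illustrative assumptions, not taken from any real system.

```python
# Simplified Bayesian Knowledge Tracing (BKT): maintain a probability
# that the student has mastered a skill, update it after each answer,
# and pick the next item whose difficulty matches current mastery.

def bkt_update(p_mastery, correct, p_learn=0.15, p_slip=0.1, p_guess=0.2):
    """Return the updated mastery estimate after one observed answer."""
    if correct:
        evidence = p_mastery * (1 - p_slip)           # mastered and didn't slip
        total = evidence + (1 - p_mastery) * p_guess  # or guessed correctly
    else:
        evidence = p_mastery * p_slip                 # mastered but slipped
        total = evidence + (1 - p_mastery) * (1 - p_guess)
    posterior = evidence / total
    # Account for learning that may occur on this practice opportunity.
    return posterior + (1 - posterior) * p_learn

def pick_next_item(p_mastery, items):
    """Choose the item whose difficulty best matches the mastery estimate,
    so a novice isn't overwhelmed and a stronger student isn't bored."""
    return min(items, key=lambda item: abs(item["difficulty"] - p_mastery))

items = [
    {"name": "warm-up", "difficulty": 0.2},
    {"name": "standard", "difficulty": 0.5},
    {"name": "challenge", "difficulty": 0.8},
]
p = 0.3                          # prior belief that the skill is mastered
p = bkt_update(p, correct=True)  # student answers correctly
print(round(p, 2), "->", pick_next_item(p, items)["name"])  # 0.71 -> challenge
```

Real systems fit per-skill parameters to data, and, as Nixon notes, some also fold in affective signals; this sketch only shows the core update-and-select loop.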

Capuano:

What are the benefits and the potential drawbacks of that kind of personalized approach?

Nixon:

One of the drawbacks is our kind of over-reliance on technology. I struggle with this thought because it feels antiquated in some way. If you look at history, there was pushback on writing things down when we first started writing things down. There was pushback on the printing press. And there’s pushback here because we’re saying, “Oh, we’re over-relying on technology and AI, and we’re outsourcing so much of our cognitive abilities to AI.” But we also got past all of those earlier obstacles, and those fears weren’t actually very accurate. So, there’s a tension there when I name over-reliance as a drawback.

Doroudi:

I think one benefit is that teachers can’t necessarily give individualized attention to every student. So, if we are able to personalize experiences as well for individual students, they might be able to get sort of a unique experience that they wouldn’t otherwise be able to get in a large classroom.

At the same time, I don’t want to overemphasize that because I think there’s a lot of hype, and a lot of companies will try to sell their products as doing this perfect kind of personalization, but we still haven’t really figured it out. And a good teacher – or a good tutor – can do certain things that I don’t think we’ve been able to replicate with technology, with AI.

You know, we can personalize in certain ways, as Nia mentioned, but I think learning is very complex and this is something I’ve realized in my own research. I’ve tried to do some of this work, and I’ve realized it’s easier said than done, right? And so, learning is just very complex. And when you bring in the emotional pieces, the social pieces, like we don’t really know how to model all of that to know what’s the right thing to do.

And the technology’s limited by what it can do, whereas a teacher can say, “Okay, if this isn’t working, you know, let’s all just go outside. Let’s do something totally different.” And a teacher can come up with that on the spot. No AI tool that I know of is doing something like that.

With the modern approaches now – these language-based tutors, these chatbots – they can seem to personalize very well, but they actually lack some of the rich understanding that Nia talked about earlier, like modeling exactly the kinds of knowledge that we want students to learn and knowing exactly what to do.

The way it approaches it is totally different. It’s doing it in a way that we can’t really predict. And so, as researchers and educators, we don’t really know what it’s going to do. Sometimes it’ll behave really well, and sometimes it might not – a lot of times it doesn’t, actually. So, that’s one of the drawbacks to really be aware of.

Capuano:

You alluded earlier, Professor Doroudi, to some of the ethical considerations that go into integrating AI into education. What do those look like?

Doroudi:

Yeah, I think there are a number of ethical considerations. One is data from students and data privacy issues. I’m not an expert on that, but I think, “Where’s that data going? Who has access to it?” Sometimes these tools are made by companies. What do they do with that data? Are they selling it to people or to other companies? And so, I think there are lots of considerations there.

And another one that I’ve been interested in – in my own work – is this issue of equity. AI has a lot of biases: when we fit models to data, that data can be biased in many different ways. And with these biases, sometimes, you know, it’s not that someone is ill-intentioned. Sometimes we have the best of intentions, but now we’re sort of ceding some of our authority to this tool that we’ve developed, and we don’t really know what it’s going to do in all cases.

So, it might behave differently with students from different backgrounds. For example, ChatGPT and these language-based AI tools are trained on data, and that data might be more representative of certain kinds of students and not others, right? Then, when interacting with students who might speak different dialects or just come from a different cultural background than whatever cultural backgrounds were most represented in that data, the AI might interact differently with those students in ways that we might not even expect ahead of time.

Capuano:

We’ve talked about some of the concerns that arise when we implement AI in the learning environment. Are there any that haven’t been mentioned yet?

Nixon:

When we think about what these systems could look like in a couple of years, they’re going to move from focusing primarily on cognitive development to becoming multimodal sensing agents. And by that, I mean we can start to have rooms that track your facial expressions as you move in and out of them, and track different physical shifts as well as your language and discourse, and use all of that for good, in one instance – say, tracking when a student is stressed out, or different social developmental things that would be helpful.

I think another concern there is a different type of privacy that I don’t hear talked about a lot – beyond just data privacy – that maybe we could call the emotional privacy of students: what we expect to be our internal states of being get kind of exposed by these AI systems. And so, I think that’s an interesting one – one I’m still percolating on. I don’t know how best to discuss it just yet, but I think it will become a topic of conversation moving forward.

Doroudi:

Yeah, there’s a lot of concern that these tools are going to be used for surveillance, right, and for ill intentions, right? Like, it might sound like, “Oh, this is great. We’re able to track all of these things. We have all these sensors in the classroom,” and it’s like, “Well, what are you doing with it?”

And as we’ve seen, for example, during the pandemic, a lot of universities and high schools were using proctoring software, and they would use video data to see if the student was doing something – misbehaving. At the beginning, they used some facial recognition software, and sometimes the software wouldn’t detect students with darker skin tones, so there were issues like that. And then sometimes a student might be doing something – maybe they have a tic or something – and the software would flag them as cheating, right? So, it’s surveillance that really has negative repercussions, again, due to biases that I mentioned earlier.

Capuano:

With this increasing reliance on AI in education, how do we ensure equitable access to these technologies for all students, regardless of their socioeconomic background or perhaps their geographical location?

Nixon:

I think that’s kind of a task for policymakers, right? Prioritizing projects that are aimed at contributing to that – I think that’s a huge one. And – to some of Shayan’s concerns as well – we need policies in place to both protect students and ensure access to these things. And those are kind of two sides of the same coin, right? We want you to have it, and we want to protect you from it as well.

Capuano:

Looking ahead, what do you envision as the next breakthrough for the use of AI in education?

Nixon:

Forefront in my mind is something that I’ve been very fascinated by for the last couple of years – and that we actually have a collaboration going on around – is this idea of… well, I also want to give a shout out to an article called “Machines as Teammates” – it’s got like 12 authors. It’s a conceptual piece all around “what does it look like when we stop using AI as a tool?”

So, instead of Alexa or Siri – “Hey, do this for me, put this on my shopping list” – it becomes something akin to you and me speaking right now. We treat it very much like another human. We engage with it, we help it, it helps us, and we navigate and solve problems together in teams.

And so, I think – to your question – the next kind of big breakthrough, or one of the next big breakthroughs that we are working on, is imagining, or starting to study, AI as teammates. So, AI not as a virtual agent or a virtual tutor, but AI as another peer student trying to solve the problem alongside you, with all the same and different, perhaps unique, emotions and cognitions and things. So, I think that will be interesting to see.

Doroudi:

I’m always wary of making predictions because it’s so hard. You know, I wouldn’t have predicted the sort of boom in interest in AI and education that came about when ChatGPT was released. But I think one prediction that I might make is that the future of AI in education is going to be bespoke.

By that, I mean that we’re not going to see like one killer app that everyone’s going to be using and it’s going to be used in all schools. That’s never really happened in the history of educational technology. So many people have talked about the promise of a particular application or a particular software or tool, and for a while there was a lot of interest in that, and then it sort of died out for various reasons.

But I think what we see happening now is that AI is being put in the hands of people who previously couldn’t create their own tools. Now they can try to create tools with the AI, right? Through things like prompt engineering, you can prompt the AI to behave in a certain way. And as we’ve discussed earlier, this has a lot of limitations – you can’t always count on it to behave the way you intend – but now teachers can create a custom application for their classroom that was not really possible before. Or school districts can come up with custom applications with AI that, again, weren’t really possible before – you know, previously a company had to develop something that many school districts would adopt.

So, I think we’re going to see a lot of these sort of custom tools being built by individual educators, by districts and various stakeholders, right? By students themselves, right? Students can create AI tools – that itself is an educational experience. So, I think we’re going to see a proliferation of different tools that we don’t even know about… as researchers, we won’t even know what’s being developed. But some of them will work really well in some cases. Some of them might not. And then, hopefully, people will move on and try something different.
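As a concrete illustration of the prompt-engineering path Doroudi describes, here is a minimal sketch of how an educator might shape a general-purpose chat model into a classroom-specific tutor with nothing but a system prompt. It assumes the OpenAI Python client; the model name, prompt wording, and function name are illustrative choices, not anything recommended by the guests.

```python
# A hypothetical prompt-engineered tutor: the entire "application" is a
# carefully written system prompt wrapped around an off-the-shelf model.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

TUTOR_PROMPT = (
    "You are a patient algebra tutor for 8th graders. Never give the "
    "final answer directly. Respond with one guiding question or hint "
    "at a time, and keep each reply under three sentences."
)

def tutor_reply(student_message: str) -> str:
    """Send one student message through the tutoring prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("I'm stuck on 3x + 5 = 20. What's x?"))
```

Note that, per the unpredictability both guests raise, nothing in the prompt guarantees the model will actually withhold answers or stay on topic – which is exactly the limitation that separates these bespoke tools from traditional intelligent tutoring systems.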

Capuano:

What steps can educators and policymakers take to kind of prepare for whatever the next wave is? I mean, ChatGPT came in like a tsunami and washed over all of us and was a gigantic “wow!” And not knowing what’s next, is there anything that educators and policymakers can do to get ready for that?

Doroudi:

Yeah, that’s a tough one. I think part of getting ready for the next step is really understanding what’s going on – what AI really is, how these tools work. And I think that speaks to AI literacy. You know, we talk about this for students often. This has been a growing area: students need to be literate about AI tools because they’re common in society. So many jobs now require them; otherwise, they might displace people’s jobs – you know, a lot of the rhetoric that exists out there.

But I think teachers also need to be AI literate. They have to understand what these tools are, how they work, and when they don’t work. And part of the value of that AI literacy, I think, is that the more you have of it, the more quickly you can get up to speed when a new tool comes about, right? Rather than starting from scratch with, “Oh, I have to understand what this tool is entirely.”

So, if we work together – you know, policymakers, researchers and educators – to increase efforts in AI literacy, both for students and for teachers and administrators, all the stakeholders, then I think people will have some familiarity with what these tools are and how they work. Just as teachers, hopefully, already have familiarity with computers, the internet, those tools. So, if new things come about, they can adapt to them.

But with AI, because it’s a little bit more foreign and people don’t have a good sense of what’s happening behind the scenes, I think there needs to be more work developing that literacy. And that’s one thing we’re doing right now in my lab with a Ph.D. student of mine: we’re actually surveying undergraduate students to see how much literacy they have about these new generative AI tools and what some common misconceptions might be. So that’s the first step: understanding what people already know and what they don’t know, and then working to address those barriers and challenges.

Nixon:

I couldn’t agree more. So, policymakers can support efforts for AI literacy. One of the classes I teach is called “21st Century Literacies.” In that class, we cover collaboration, communication, creativity – all of these things that have become increasingly important in the 21st century – not that they weren’t important before – but as we’ve moved from an industrial, individualized model to more collectivist, collaborative working environments, I think AI literacy is just as important as all of those, if not more so. And I’ve started to integrate it into the classroom because it’s so critical for students and teachers to have some type of foundation to navigate from, because I feel like a lot of the flailing you might see right now around education and AI comes from a lack of education about AI and/or misinformation around it. And so, addressing some of those gaps is going to be great moving forward.

Capuano:

Is there anything that either of you wanted to share that I didn’t ask about that you thought, “This is something I want to make sure I bring to this conversation and share with the audience?”

Nixon:

Maybe a closing point: there’s been a lot of discussion around the pros and cons of AI in education, and some people initially just tried to shut it down – or shut down the most recent wave – and completely remove it from the classroom. I don’t think that’s a realistic or helpful approach. I think this ties nicely into AI literacy: this is not a switch that we can turn off. It’s here, for better or for worse. And I think doing rigorous research around a lot of the topics we discussed today is how we move forward, combined with educating students and teachers and learning how to use this to our benefit instead of being fearful of it.

Doroudi:

One thing I’d like to add is that we’ve been talking so far about AI tools, right? AI for practical purposes – how it’s going to be used, for better or for worse, in classrooms. But one focus of my research has been AI not only as this practical tool, but as a lens to understand human intelligence and ourselves as people. And that was really the quest in the early days of AI – it was focused on developing tools that could help us understand how the mind works from a cognitive science perspective. And so, I think that’s sort of been… I wouldn’t say completely forgotten. There are still people thinking about that, but I think it’s been largely abandoned because AI has become so powerful as a tool that people just focus on, “What can we do with it?”

And the AIs that we’ve developed have looked very different from people. So, I think because of that, people have just sort of moved away from that question. But I think it’s worth thinking about how AI can help us understand ourselves better, and this has a lot of educational implications. A lot of those early researchers were interested in, “Well, how can we understand how people learn and then use that to improve education?” And I think there are a lot of opportunities there. With some of the new tools, for example, a lot of people talk about how, “Oh, these tools are amazing! They seem to show aspects of intelligence, but they also have these weird behaviors that are very not human-like.” So, by reflecting on these tools – by reflecting on things like ChatGPT – we can think about, “What does that tell us about ourselves as people?”

And how can students engage in experiences with these AIs to understand what makes us distinctly human, in a sense? One project we’re trying to get started on this: I’m collaborating with UCI philosophy professor Duncan Pritchard – who was actually a previous guest on this podcast – and we’re thinking about what AI can tell us about intellectual virtues, and how children, or youth, interacting with AI can learn more about the importance of intellectual virtues, which AI, I would say, does not have.

Capuano:

Yes, there’s a whole “Anteater Virtues” project that Professor Pritchard is in charge of. Thank you both so much for joining us today to share your in-depth knowledge about AI and education.

Doroudi:

Thank you for having us.

Nixon:

Thank you for having us.

Capuano:

I’m Cara Capuano. Thank you for listening to our conversation. For the latest UCI News, please visit news.uci.edu. The UCI Podcast is a production of Strategic Communications and Public Affairs at the University of California, Irvine. Please subscribe wherever you listen to podcasts.