
The intersection of AI and scholarly values

Anteater Insider podcast explores how these tools can enhance research and teaching practices

Though becoming an integral part of our lives only recently, artificial intelligence has a history dating back to the mid-1950s. Its application in education emerged during the 1960s.

Rapid advancements in AI technology have, in some ways, outpaced our understanding of its impact on our lives. There appear to be more questions about using AI than there are answers. However, scholars and students must acknowledge that AI is here to stay at the university, and with that comes the responsibility to use it to advance scholarly values.

In this edition of the Scholarly Values Anteater Insider podcast, Duncan Pritchard, Distinguished Professor of philosophy and chair of the Year of Scholarly Values Committee, discusses the intersection of AI and scholarly values and how these tools can enhance research and teaching practices. Joining him is Shayan Doroudi, an assistant professor of education. Doroudi studies the foundations of learning and how they relate to the design of socio-technical systems, including those that use AI, to improve learning.

To get the latest episodes of the Anteater Insider podcast or The UCI Podcast delivered automatically, subscribe at Apple Podcasts or Spotify.

TRANSCRIPT

Duncan Pritchard:

One of your areas of inquiry is the design of systems that improve learning. How can such systems use AI, and what should we be aware of when using AI in this way?

Shayan Doroudi:

As you alluded to in the introduction, AI has been used for a long time. A variety of systems have been developed over the past few decades for various purposes, including trying to give students a personalized learning experience. But maybe we want to focus more on the recent use of AI, which is generative artificial intelligence, things like ChatGPT – models that can generate text, images, videos, and all sorts of things. There are many ways of using these systems that could, in theory at least, advance learning or be designed to advance learning.

There are a variety of applications. One popular one is a system that tries to tutor students or give students a kind of personalized educational experience, like a chatbot a student can talk to to learn new things, dive deeper into subjects, or get feedback. What we should be aware of when thinking about these applications is that these systems don’t know what the truth is, and they’re not optimized to be truthful. In many cases they might be accurate and truthful, and in many cases they’re not. It can be difficult for a student, especially a novice learning a new area, to disentangle those cases.

I think, and maybe we’ll dive into this later, those kinds of applications have this concern. There are other applications, though, where truth is not as important for the system. The system isn’t designed to tell the student what’s right or wrong, what’s true or not. The system could be a tool that encourages students to reflect on something and prompts students to think about what it’s saying. And if it’s prompting the student to think about various things, then the student isn’t just taking whatever the AI is spewing out as truth. Thinking about some of those applications is interesting. We’re exploring some in our lab: how we can design AI systems where, even if they say some things that are made up, that’s OK because the student isn’t treating that as the truth.

Duncan Pritchard:

Interestingly, you suggest that students sometimes struggle to discern what’s accurate from what’s not. Does that mean that when scholars use AI, they’re better placed to make use of it? Traditionally we’re experts, and our expertise means that we’re already in a position to evaluate the content that’s being given to us.

Shayan Doroudi:

Not always. Experts are not always able to tell either. I mean, maybe we should be able to tell, right? Especially if it’s an area we have expertise in. But one of the concerns is that we might just go with what the AI is saying if we’re not being careful. If we’re not being careful about what it’s telling us, we might miss certain things. I’ve seen examples where experts wrote a paper and there’s a citation in there. For one paper, I was curious about a citation, but I could not find it anywhere. I reached out to the authors, and it was clearly accidental that the citation got into the paper, but it somehow got in. That’s particularly concerning.

Duncan Pritchard:

I gather that some of these fictional papers are getting quite a lot of citations. They’re appearing on Google Scholar and places like that. There are lots of ethical issues raised by AI in the educational context, and we just explored one of them. One thing I want to focus on is the question of how, as students, teachers and scholars, we go about using AI with integrity.

Shayan Doroudi:

I think it’s a big one, and one that’s been on my mind lately, as an instructor and as a researcher who’s interested in this. Maybe the first step is differentiating between people who want to go about this with integrity and those who don’t. We see this a lot with students. There’s a lot of discussion about students who may be intentionally trying to use these tools without integrity. I think the latter group is particularly concerning.

It’s not just students, right? Instructors, too, might be using these tools to pass off things that are not their own without attributing them to AI. The question becomes, how do we change the culture so that people are less inclined to do that? A lot of the work that you’re doing with scholarly virtues and getting people to think about the importance of integrity in their day-to-day activities is very important. And we don’t have any easy answers here. It’s not just a question of AI; AI is accelerating how people can act with less integrity.

For those of us who espouse this ideal of integrity, we just have to be more careful. Sometimes we might convince ourselves that we wrote something when we didn’t, right? We might forget, as in the case I mentioned, how an improper citation ended up appearing in a paper. In that particular case, I don’t think the researchers were being disingenuous or intentionally trying to pass something off as their own. I think it was just a lack of carefulness. Another use of AI that might seem totally innocuous is to help improve your writing. I’ve seen this with students who may be non-native English speakers. It can be very powerful to use AI with some of your writing to make it sound better, right? But where do you draw the line between improving my grammar and improving my writing so much that the text is now actually co-written with AI? It’s no longer my voice, as a scholar or as a student. I don’t know the answer there, but I think we need more research, and maybe some philosophers need to chime in, to decide how much it is OK to use AI to adjust your own words and still view them as your own.

Duncan Pritchard:

What scholarly values do you think might be impacted or otherwise influenced by AI?

Shayan Doroudi:

One that we alluded to earlier is intellectual carefulness, just being careful with what you’re doing so that what you’re putting in a paper is your own. And also vetting the sources of the information being generated by AI, making sure it’s accurate. One of the things I’m particularly concerned about is that researchers, students and scholars might be using ChatGPT to search for information and not realizing that that information could be inaccurate or not going through all the steps to make sure it’s accurate.

But I think an even bigger concern might be people who think, I’m not going to use AI, I know it’s problematic – because that’s not really an option anymore. If you just go on Google, you’re getting these AI overviews, right? And I am concerned that many people who do a quick Google search might just go with that information and assume it’s true. I’ve run into many cases personally where the information I get is not true. I find that even for myself, even though I’m very aware of these issues, there are times when I just look at the answer and get a quick sense, and I have to stop myself and ask, do I need to take some additional steps? Do I need to go look at some additional links to make sure that’s accurate? Or, at the very least, click on the citation the AI overview is giving and see what it says on that website. In some cases, it contradicts what the website says. I think following through with those steps is important.

I think with these new digital tools and with the prevalence of misinformation and AI-generated information, some scholars may not have been trained to vet all of these sources. And I think that’s very important. Another issue is intellectual autonomy, you know, thinking for ourselves. What if we use AI just to brainstorm ideas, but then we write things in our own words? Did we take a shortcut that led to losing out on an important part of the cognitive process leading to whatever we ended up doing? I’m concerned that even if students, and scholars as well, are trying to do this with integrity, we might be taking these shortcuts too much. We might become too reliant on the AI and lose out on the process, which is really what education is about.

Duncan Pritchard:

One thing we emphasize in the Anteater Virtues project that I run is that the value and the joy of education are lost if you cheat. If you take these shortcuts, you miss out on the moments when one can get the best out of education, which is to cultivate your intellectual character. When you take these shortcuts, you enter a vicious cycle, because it degrades your intellectual character and makes you more likely to take shortcuts, and so on. Whereas if you get into a virtuous cycle and don’t take the shortcuts, then you can start to get the joy of learning and feel your confidence grow and your intellectual character develop. It does worry me when students get the grade but haven’t learned anything. They’ve missed out on something very important and of personal benefit to them.

Education is not just about grades. I want to ask about how we ensure the accuracy and reliability of AI-generated content. You mentioned carefully checking the content and so on. One of the problems with that, of course, is that you need a bit of expertise, and some people don’t have that. Often when we check things, we shouldn’t go back to the same source to check them. But when people do check, how do they do it? Well, they Google something, and then they’re still at the level of a Google search; they’re not getting an independent source. So what other things can we do to check the reliability of this content?

Shayan Doroudi:

There’s probably important work in information literacy, and experts who hopefully have some tips for us. But I think you’re right. I think the research would suggest that we should go to other sources. We shouldn’t just go to the citation that the AI is giving and see if it’s accurate; that citation could be a website that’s not accurate to begin with. You could click on some of the other links and see what answers they’re providing. If you’re seeing inconsistent answers, you could go to books and things like that.

But even if it’s online content, if you’re drawing from a diversity of sources, you’re more likely to see whether there’s a consensus across different websites on something. For example, websites might come from different parts of the political spectrum. If it’s a political issue and they agree on something, that’s a pretty good indicator that it’s accurate. But if you find that sources are disagreeing, or you re-prompt the AI and it’s disagreeing with what it said previously, then that’s where there should be a red flag. I don’t know if this is exactly what you’re asking, but as far as ensuring reliability, one could ask if there is a way to make the AI accurate to begin with.
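To make that cross-checking concrete, here is a minimal, illustrative sketch of the idea: gather answers to the same question from several sources (or repeated prompts) and treat disagreement as a red flag. The source names and the ask_source placeholder are hypothetical stand-ins for however one would actually query websites or an AI system; they are not any specific tool's API.

```python
# Illustrative sketch only: compare answers from multiple sources and flag disagreement.
# ask_source() and the source names are hypothetical placeholders, not a real API.
from collections import Counter

def ask_source(source: str, question: str) -> str:
    """Placeholder: in practice this would fetch an answer from a website,
    a book, or a fresh prompt to an AI system."""
    canned = {
        "site_a": "Answer X",
        "site_b": "Answer X",
        "ai_overview": "Answer Y",
    }
    return canned[source]

def check_consensus(question: str, sources: list[str]) -> None:
    """Report a consensus answer if all sources agree; otherwise raise a red flag."""
    answers = Counter(ask_source(s, question) for s in sources)
    if len(answers) == 1:
        print("Sources agree:", next(iter(answers)))
    else:
        print("Red flag - sources disagree:", dict(answers))

check_consensus("Some factual question", ["site_a", "site_b", "ai_overview"])
```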

It’s a big topic AI researchers are working on, and with this kind of current AI, built on generating text, there’s no way to guarantee that it’s going to be truthful or accurate. We can improve it here and there, but we can’t put guarantees on it. That’s important to be aware of. There are some other techniques, like retrieval-augmented generation, where you’re not just relying on ChatGPT; instead, whatever AI you’re using also has access to some documents. An instructor, for example, might be able to create AI apps or websites that let you upload reference materials – your syllabus, some papers, some textbook pages – that the AI is more likely to draw from. Can it still make something up or hallucinate? Yes, absolutely. But it’s less likely to do that. It’s more likely to draw from that information and say things that are consistent with what’s there. So you should still be careful. But as an instructor, if you’re trying to have your students use an AI that’s more grounded in your materials, that’s one way of doing that.
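For readers who want to see what that grounding step looks like, here is a minimal sketch of retrieval-augmented generation as described above: retrieve the uploaded materials most relevant to a question, then hand them to a generative model along with the question. The documents, the word-overlap scoring, and the generate_answer placeholder are all assumptions made for illustration, not any particular product's API.

```python
# Minimal, illustrative sketch of retrieval-augmented generation (RAG).
# The documents, scoring method, and generate_answer() placeholder are hypothetical.

def score(query: str, doc: str) -> int:
    """Crude relevance score: how many query words appear in the document."""
    return sum(1 for word in query.lower().split() if word in doc.lower())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def generate_answer(query: str, context: list[str]) -> str:
    """Placeholder for a call to a generative model. A real system would send
    this prompt to a language model; the model can still hallucinate, but it is
    more likely to draw from the supplied context."""
    return ("Answer the question using only these materials:\n"
            + "\n".join(context)
            + f"\nQuestion: {query}")

# Example: an instructor's uploaded course materials serve as grounding documents.
course_docs = [
    "Syllabus: the midterm covers chapters 1 through 4; office hours are Tuesdays.",
    "Textbook excerpt: retrieval-augmented generation grounds answers in source documents.",
]
print(generate_answer("When is the midterm?", retrieve("When is the midterm?", course_docs)))
```

In a full system the retrieval step would typically use embeddings rather than simple word overlap, but the flow is the same: retrieve relevant material, then generate from the retrieved context.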

Duncan Pritchard:

That’s really interesting. There’s been a lot of discussion about the biases that get into the AI algorithms. Can you tell us a little bit about that discussion and how we go about mitigating against them?

Shayan Doroudi:

There are various reasons these algorithms could be biased. One of the clearest is that the data they’re trained on is biased. They’re being trained on text generated by humans; humans are biased, and those biases will be reflected in the text that gets generated by AI. We don’t always know what these biases look like, and we don’t always know what to expect. And it’s not just that some of the text on the Internet has lots of biases; sometimes it’s also how much of the text in the training data reflects different cultural backgrounds and things like that.

For example, these models have been trained on much more English text than text from some languages that fewer people around the world speak. They’ve been trained on more text that reflects cultural values in the West rather than cultures in other countries. And so those biases will trickle in as well so that when you prompt the AI, it might give an answer that in some nuanced ways might reflect cultural norms that are from European countries or the United States and not other countries. People are trying to improve the models and to get them to be less biased.

Sometimes these AI models will not answer something if it’s problematic, but there are still ways for people to get around that and get quite problematic answers. I don’t think there are any easy solutions, right? I think, again, a lot of these things require humans to be thoughtful. By having some of those scholarly values, we can make sure we’re vetting the information given by the AI for various biases, the same way that we might with people. When we interact with people, we have to be careful of their biases. The AI’s biases might not be the same as people’s, so we should be careful about that. But we need to be cognizant that those biases will exist.