ChatGPT, Generative AI, and the Library
By: Amy Trost
Since its release at the end of 2022, ChatGPT has received considerable attention, particularly in academia. Because of its many implications for teaching and research, MC Library has been thinking and learning about how this and other generative AI tools will affect research and information literacy. Here is what we have learned so far.
What are Generative AI and ChatGPT?
Generative artificial intelligence (AI) is just what it sounds like: artificial intelligence that can create content such as text, images, audio, and data. ChatGPT (short for Chat Generative Pre-trained Transformer) is an AI-powered language tool that uses natural language processing to produce humanlike responses and dialogue in answer to questions. It can compose written content such as articles, essays, social media posts, and emails. It has opened up a wealth of opportunities and concerns.
How are librarians, faculty, and students using generative AI programs?
- Brainstorming. Generative AI can act as a “thought collaborator,” particularly in the early stages of research. The tool creates extensive lists and can guide the user in exploring research topics. Within limits, it can also suggest different types of research sources. Smart prompt engineering [1] can make this process more efficient and reinforce information literacy learning.
- Interpreting sources. ChatGPT is not designed to interpret specific sources. But Humata.ai, another tool built on large language models, will analyze the content of a written document and allow users to ask questions about it. It can generate concise summaries of esoteric research articles.
- Coding. With the right prompts, ChatGPT can also act as a capable programmer, since program code is simply another kind of language it has learned to generate (see the sketch after this list). Depending on how this capability is used, it can either improve productivity or enable plagiarism.
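As a simple illustration of the coding point above, here is the kind of exchange a student might have. The prompt and the Python function below are hypothetical examples written for this post, not actual ChatGPT output; they only show how a plain-language request maps onto working code.

# Hypothetical prompt: "Write a Python function that returns the average
# word length in a text file, ignoring punctuation."
import string

def average_word_length(path: str) -> float:
    """Return the mean word length in the file at `path`, ignoring punctuation."""
    with open(path, encoding="utf-8") as f:
        words = f.read().split()
    # Strip surrounding punctuation from each word before measuring it.
    cleaned = [w.strip(string.punctuation) for w in words]
    lengths = [len(w) for w in cleaned if w]
    return sum(lengths) / len(lengths) if lengths else 0.0

Whether a response like this is a productivity aid or a shortcut around learning depends entirely on how the student uses it and whether the instructor permits it.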
Where has Generative AI fallen short?
- Plagiarism. Generative AI can be cited in a reference list [2]. However, the risk of plagiarism makes this a difficult tool for students to use. Students can unintentionally commit plagiarism, and they may not be aware that ChatGPT and other tools need to be cited.
- Fabricating sources. ChatGPT often generates plausible but false sources [3]. We have encountered this in library chat: a student or faculty member will ask us to find a source, and it can take some time for us to realize that the source doesn’t exist. This happens because the model is not searching through a list of information for its results. Instead, it’s generating them on the fly based on a set of language-based rules.
- Hallucinations. With more complicated questions, and sometimes even with simple arithmetic, ChatGPT can begin to fail. Some of these hallucinations are obvious and have received public attention [4]; many others are subtler. Recently, I asked ChatGPT to describe how the New York Times reported on the January 6 attack on the Capitol; the model began to contradict itself after a few questions.
MC Library plans to continue researching and experimenting with these tools as they evolve. Please feel free to contact a subject librarian for help exploring these issues with your students.
[1] For an example of this, see Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720. https://doi.org/10.1016/j.acalib.2023.102720
[2] See https://apastyle.apa.org/blog/how-to-cite-chatgpt and https://style.mla.org/citing-generative-ai/
[3] See the attached PDF for an example of a research paper generated by ChatGPT with fabricated sources.
[4] See https://montgomerycollege.primo.exlibrisgroup.com/discovery/npsearch?vid=01MONTGOMERY_INST:MC&query=any,contains,chatgpt%20hallucination&search_scope=all&pcAvailability=false&offset=0