
Artificial Intelligence and AI Writing

This guide presents information about AI-generated text and language models: how they work and where they fall short. It also includes resources for best practices in the classroom and insights into AI plagiarism and detection.

Student Resources

Evaluating the Reliability and Authority of AI-Generated Text and Media

  • Who is the author? Could their view be biased in any way?
    • Text or images generated by AI tools have no human author, but they are trained on materials created by humans with human biases. Unlike humans, AI tools cannot reliably distinguish between biased material and unbiased material when using information to construct their responses.
  • Who is the intended audience?
    • Generative AI tools can be used to generate content for any audience based on the user’s prompt.
  • What is the intended purpose of the content? Was it created to inform, to make money, to entertain?   
    • Generative AI tools can create convincing text and images that can be used to spread many different ideas without making clear that the information or images may be false.
  • Where was it published? Was it in a scholarly publication, a website, or an organization page?
    • Generative AI has already been used to create content for websites and news outlets. Considering whether the source is scholarly, has a good reputation, and has a clear history of providing reliable information helps you judge whether what you find is trustworthy or misleading.
  • Does it provide sources for the information?
    • Articles, news outlets, and websites that provide sources can be an indicator of reliability. Following the links and citations to verify the information will help confirm that what you find is accurate.

Limitations of AI: Hallucinations and Fake News

"ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now." - Sam Altman, CEO of OpenAI, Dec 10, 2022 via Twitter @sama

Generative AI natural language processing tools, language models, and chatbots like ChatGPT have been shown to hallucinate, or provide completely unsubstantiated information. AI-generated text also tends to sound confident, so it can be difficult to tell which information is trustworthy and which is not.

Since AI systems are developed by humans and trained on human language, they can never be fully neutral. For example, ChatGPT's default tone and style tend to replicate US norms of "professionalism" that privilege some vocabularies and grammars over others. And it's trained to avoid giving bigoted or sexist answers—but in doing so, it's using parameters for bigotry and sexism that were developed by humans. When using ChatGPT and similar tools, it may be helpful to assume these types of bias exist and be on the lookout for them.

To learn more, you can read the basics about AI hallucinations here. To see just how far off the rails a generative AI can go, read about the Bing chatbot that tried to convince New York Times columnist Kevin Roose in February 2023 that he should leave his wife and be with the chatbot instead. To understand the importance of checking your work to avoid the misinformation that AI hallucinations can produce, read about the six fake cases created by ChatGPT in the Steven A. Schwartz and Peter LoDuca court case, first reported on May 30, 2023, and followed up on June 26, 2023, by reporter Lyle Moran at Legal Dive.

Many of the same lessons learned when discerning fake news from legitimate sources can help when interacting with AI generated content or determining whether any website should be trusted. Tredway Library also has a few resources to help you to employ fact checking strategies like lateral reading (Tredway tutorial guide) and other source evaluation methods (FYI resource guide) to verify information from AI generated content.

(Adapted from AI Tools and Resources/University of South Florida Libraries)


Prompt Engineering

Good prompts are the key to using AI tools effectively. The following "Role Task Format" or "RTF" framework can assist you with optimizing the outcomes of using a generative AI tool:

Essentially, to create a good prompt you should:

  • Indicate the Role the tool should adopt when responding, for example: critic, expert, inventor, marketer, journalist, copywriter, etc.
  • State the specific Task required, for example: e-mail, article, speech, summary, business plan, brainstormed ideas, step-by-step guide, outline, pros and cons analysis, etc.
  • Define the desired structure of the output Format, for example: table, graphs, diagrams, checklists, code, images, bullet points, text, etc.

(Adapted from Jeffrey Zheng on LinkedIn, May 2023; image from the slides for "ChatGPT Unleashed: What to Expect This Fall and How to Prepare," a webinar offered by Alchemy on June 22, 2023.)
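As a rough sketch, the RTF framework amounts to filling a simple template. The function and wording below are a hypothetical illustration, not part of any official tool:

```python
# A minimal sketch of the Role-Task-Format (RTF) framework:
# assemble the three parts into a single prompt string.
def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Build a prompt that states a Role, a Task, and an output Format."""
    return (
        f"Act as a {role}. "
        f"Your task: {task}. "
        f"Present the result as {fmt}."
    )

prompt = rtf_prompt(
    role="copywriter",
    task="draft a short e-mail announcing the library's new AI guide",
    fmt="three bullet points followed by a one-sentence call to action",
)
print(prompt)
```

The resulting string can be pasted into any chatbot; the point is simply that naming the role, task, and format explicitly tends to produce more useful output than the task alone.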


"Prepare, Edit" is another prompting strategy from Dan Fitzpatrick's TheAIEducator.io.


The following video from App of the Day also has a number of useful insights regarding prompting, using a similar "prompt formula" of [Context] + [Specific information] + [Intent] + [Response format] = Perfect prompt: https://www.youtube.com/watch?v=pmzZF2EnKaA.

Also see this advanced Prompt Engineering Guide from DAIR.AI for more ideas.


Zero-shot vs multiple-shot prompting

  • Zero-shot: prompt with zero examples
  • One-shot: prompt with one example
  • Multiple-shot: prompt with more than one example

Providing examples gives the model more context for what you want.
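The difference between these styles can be sketched as prompt construction: a zero-shot prompt is just the instruction, while a few-shot prompt prepends worked examples. The sentiment task and example reviews below are invented for illustration:

```python
# Sketch: building zero-shot vs. few-shot prompts for a sentiment task.
instruction = "Classify the sentiment of the review as Positive or Negative."

examples = [  # each (input, label) pair is one "shot"
    ("The checkout process was fast and easy.", "Positive"),
    ("My order arrived broken and late.", "Negative"),
]

def build_prompt(instruction, examples, query):
    """Zero-shot if `examples` is empty; one-/multiple-shot otherwise."""
    lines = [instruction]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

zero_shot = build_prompt(instruction, [], "I love this library guide.")
few_shot = build_prompt(instruction, examples, "I love this library guide.")
```

Passing one example makes it a one-shot prompt; two or more, as above, makes it multiple-shot.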


Chain of Thought Prompting

When you divide a single task into manageable steps, you help the LLM produce more accurate and consistent results.
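One simple way to apply this is to spell the steps out in the prompt itself. The task and step list below are invented for illustration:

```python
# Sketch: turning a single request into explicit steps
# (chain-of-thought-style prompting).
task = "Estimate the total cost of printing 250 double-sided flyers."
steps = [
    "List the information needed (paper cost, ink cost, pages per flyer).",
    "Compute the cost per flyer from that information.",
    "Multiply by 250 to get the total.",
    "State the final estimate and any assumptions made.",
]

prompt = task + "\nThink through the problem step by step:\n" + "\n".join(
    f"{i}. {step}" for i, step in enumerate(steps, start=1)
)
print(prompt)
```

Even the bare phrase "think step by step" often helps, but enumerating the steps you care about gives the model a clearer path to follow.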


Natural Language Prompting

Reports suggest that using polite language with AI models like ChatGPT produces more effective results! Keep this in mind when prompt engineering. The clearer and more precise your own writing is in the first place, the better the results you tend to get from language models.

Other natural language prompting techniques include phrases like "Be concise," "Take a deep breath," "Ask me questions," "Explain your answer," or "This is important for my job." Because LLMs are trained on human-created data, phrases like these can elicit specific types of responses.


Personalized Learning Experiences
AI can assist with personalized learning experiences to cater to individual college students' unique needs, learning styles, and preferences. This approach helps students grasp concepts more effectively, stay engaged, and progress at their own pace, fostering a more efficient and satisfying learning journey.


Research Assistant
ChatGPT can assist in the initial stages of the research process, such as generating ideas, suggesting research questions, and offering contextual information. With its ability to connect seemingly unrelated concepts and offer fresh perspectives, it can be a valuable tool for researchers.

Again, be mindful of AI's limitations. Check the information for accuracy and bias, as well as the reliability of its sources. AI is known to "hallucinate" and make up sources that do not exist. Never ask an AI whether its sources are reliable, because it doesn't really know. Ask librarians and other professionals when needed.

Read more at College Vidya about tips on how to use ChatGPT as a student.


Language Learning and Translation
AI-powered language learning apps provide interactive courses personalized to each student's proficiency. Through AI algorithms, these apps assess language skills, adapt content, and offer real-time feedback. AI-driven translation tools also overcome language barriers, enabling cross-cultural communication and access to international resources. Be wary of leaning too hard on translation, but an AI chatbot can serve as a useful pen pal for practicing your writing in another language!

Read more at TalkPal about AI language learning tools.


Again, always check with your instructor for their policy on the use of generative AI tools.

Do I need to cite ChatGPT or other AI tools?

Short answer: yes.

The long answer (adapted from the University of Minnesota) is that the general practice of citation is to cite anything that comes from somewhere else: anything that is not your original thought, is not common knowledge, or is a source you pulled information from.

Where an assignment requires an AI source to be cited, you must reference all the content from the tool that you include in your assignment. Failure to reference externally sourced, non-original work can result in a violation of the college's Honor Code. References should provide clear and accurate information for each source and should identify where they have been used in your work.



Some examples of common reference styles citing ChatGPT may look like this:

MLA

"Text of prompt" prompt. ChatGPT, Day Month version, OpenAI, Day Month Year, chat.openai.com/chat.

  • Works Cited example: “Tell me about confirmation bias” prompt. ChatGPT, 12 Apr. version, OpenAI, 12 Apr. 2023, chat.openai.com/chat.
  • In-text citation: (“Tell me about”)

APA

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

  • Parenthetical citation: (OpenAI, 2023)
  • Narrative citation: OpenAI (2023)

Chicago Manual of Style

Example 1: prompt included in text

  • 1 Text generated by ChatGPT, March 31, 2023, OpenAI, https://chat.openai.com/chat.

Example 2: prompt not included in text

  • 2 ChatGPT, response to “Explain how to make pizza dough from common household ingredients,” OpenAI, March 7, 2023.

See the "Citing Generative AI" tab under the "Tredway Resources" page on this guide for more information about citing generative AI tools using specific reference styles, or the Tredway Library Citation Guide for information about citations more generally.