Artificial Intelligence (AI)

AI Definitions

AI Term

Explanation

Artificial Intelligence (AI)

  • Computer systems designed to perform tasks associated with human intelligence, such as pattern recognition or decision making1
  • AI dates back to the 1950s2. Examples of how we see AI in our daily lives include:
    • Recommendation engines (like Netflix and Spotify suggestions), facial recognition on your phone, GPS navigation, email spam filters, search engines1,3 

Generative Artificial Intelligence
(GAI, genAI)

  • A subfield of Artificial Intelligence, referring to models capable of generating content such as long-form text, high-quality images, realistic video or audio and more in response to a user’s prompt or request1,4 
  • Since the arrival of ChatGPT in 2022, there has been a surge of AI innovation and adoption5

LLM
(Large Language Model)

LLMs are an example of generative AI

  • They are trained on very large volumes of text, which allows them to predict what word should come next in written text, like the autocomplete feature in search bars2,6 (see the sketch after this list)
  • These models do not think or feel like humans do, even though their responses may make it seem like they do. When you type something (called a prompt) into ChatGPT or another LLM, it tries to extend the prompt logically based on its training2,6 
    • For example, because the word sequence “thank you” is far more likely to occur than “thank zebras,” a person’s query to an LLM asking it to draft a thank-you note to a colleague is unlikely to generate the response “thank zebras”2
    • Examples of well-known LLMs: OpenAI’s GPT models (e.g., GPT-3, GPT-3.5, and GPT-4), Anthropic’s Claude, and Google’s Gemini
    • LLMs don’t have real understanding and often make mistakes, so it’s up to the user to verify their outputs6
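To make the autocomplete comparison concrete, the brief sketch below shows a model continuing a prompt with the words it judges most likely. It is an illustration only, not drawn from the cited sources; it assumes the open-source Hugging Face transformers library and the small GPT-2 model, both chosen purely for demonstration.

    # Minimal sketch (assumed setup): the Hugging Face "transformers" library
    # and the small open GPT-2 model, used here only for illustration.
    from transformers import pipeline

    # Build a text-generation pipeline around GPT-2
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Dear colleague, thank you"
    # The model extends the prompt with the words it judges most probable,
    # which is why "thank you" is essentially never followed by "zebras".
    result = generator(prompt, max_new_tokens=12, num_return_sequences=1)
    print(result[0]["generated_text"])

Because the continuation is chosen statistically rather than reasoned out, the output can read fluently and still be wrong, which is why verification remains the user's job.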

Specialized foundation models:
Image generation
Examples: OpenAI’s DALL-E 3, the open-source Stable Diffusion, Google’s Imagen, Adobe Firefly, and Meta’s Make-A-Scene

Audio generation
Models that generate high-quality speech, sound, and music for tasks such as text-to-speech, speech enhancement, and voice conversion. Example: UniAudio

Video generation
Example: Meta’s Emu

Multimodal models
AI systems that incorporate text, images, and sound within single models
Example uses:

  • Enhance accessibility through real-time transcription, sign language translation, and detailed image descriptions
  • Reduce language barriers through near-real-time translation services
  • Support personalized learning by adapting content to different formats, such as virtual and augmented reality. Example: healthcare training environments2

Chatbot

A program that communicates with humans through text in a written interface, built on top of a large language model. Examples include OpenAI’s ChatGPT and Google’s Gemini. While many people refer to chatbots and LLMs interchangeably, technically the chatbot is the user interface built on top of an LLM1

Prompt

Text written by a human that is given to a generative AI model. The prompt often describes what you are looking for, but may also give specific instructions about style, tone, or format1

Hallucination

A falsehood presented as truth by a large language model. For example, the model may confidently fabricate details about an event, provide incorrect dates, create false citations, or provide incorrect medical advice1


AI Risk

Explanation

Overreliance & overtrusting AI

  • Overreliance on AI could negatively impact students’ critical thinking ability, learning, retention, writing development, creativity, and overall intellectual growth7,8,9,10,11,12
  • The authoritative voice of AI chatbots could lead students to believe that all AI content is trustworthy; it is important to remain skeptical of AI-generated output2,9

Hallucinations

AI models can generate results or answers that seem plausible but are completely made up, incorrect, or both2

Bias and fairness

  • Algorithmic bias occurs when an AI system produces systematically prejudiced results due to biases in its training data, design, or implementation.
  • Algorithms learn from historical data, so if the training data contains biases (e.g., gender, racial, or socioeconomic biases), the outputs can replicate or even amplify these patterns
  • Choices developers make in selecting features, labels, and model parameters can inadvertently introduce further bias. This can lead to unfair or discriminatory outcomes, often reflecting existing social biases.
  • The nature of the training data not only influences bias but also shapes the model's understanding of language, historical facts, and social norms. For example:
    • What counts as casual, informal, or academic language,
    • What historical facts should be furnished in response to user questions,
    • Which facial features (e.g., nose shape, lip shape, eye color), skin colors, and hair textures are regarded as beautiful, handsome, or trustworthy.
  • Algorithms, such as LLMs, that adapt over time based on user interactions may reinforce biases. 
    • For example, a recommendation system might continually suggest content that aligns with existing user preferences, narrowing the diversity of content exposure3

Video: AI & Biased Data Sets
"If we widely assume that AI models provide correct information when prompted, then these forms of bias can reproduce inaccuracies and prejudices without us even knowing it" -Dr. Joy Buolamwini

Deepfakes

AI makes it possible to generate highly realistic but entirely inauthentic audio and video2

Privacy

Many LLMs are trained rather indiscriminately on data found on the internet, and such data may include individuals’ personal information2

Vulnerability to spoofing

It is possible to tweak data inputs to fool many AI models into drawing false conclusions2

Explainability

The ability to explain the reasoning behind an AI system’s conclusions. Today’s AI is largely incapable of explaining the basis on which it arrives at any particular conclusion2

Copyright violations

AI models are generally trained on large volumes of online data used without the consent or permission of the data’s owners2

Environment

Training and operating large AI models, building data centers, and manufacturing specialized hardware for AI can consume large amounts of water and energy, contributing to carbon emissions. Water resources that are used for cooling AI data center servers can no longer be allocated for other necessary uses13

Also see IBM’s AI risk atlas

Appropriate AI use

Can I use AI for my coursework?

AI policies will vary between professors, courses, and by specific projects and assignments. Some faculty members will encourage or even require you to use AI, while others will prohibit it. 

  • Those decisions are based on the learning goals for the course. Keep track of the policies for each course and assignment so you don’t get confused. 
  • Read the syllabus carefully, and if AI isn’t mentioned, ask the professor. It is important to know whether AI is allowed in your course, and which ways of using AI are permissible and which are not14,15

Questions to consider for appropriate AI use

 

Before you start:
  • Does your professor allow AI use for this assignment?
  • Do you understand when and how you are allowed to use AI for this assignment?
Doing the work:
  • You are using your own thoughts, words, and tone of voice
  • You have fact-checked AI-generated information by locating and citing original sources for facts, statistics, or quotes
  • You have analyzed AI output and identified false, biased, or harmful information
  • You have documented where and how you used AI according to your professor’s expectations
When the assignment is complete:
  • You can explain your findings and demonstrate full understanding without the aid of AI
  • You can provide what sources you used and how you verified the information14

Some examples of AI uses

(depending on what your class allows)

  • Brainstorming research topics
  • Formulating an effective search in library databases (Boolean search)
  • Generating feedback on coursework
  • Learning and mastering content (be sure to read original sources to verify facts)
  • Organizing roles & tasks for group projects
  • Translations
  • Creating images14,16,17,18,19
Be cautious about the information you enter into AI tools

  • Prompts that you enter into AI tools can be used to train the model and might be used to shape future responses for others
  • When creating AI prompts:
    • Avoid inputting sensitive data (personal information, unpublished data)
    • If inputting low-risk data, consider whether you would want it to be public
    • Don’t input data about others that you would not want them to input about you20

Consider turning off data collection when you are using AI tools (example: how to change data controls settings in ChatGPT)

AI & Information Literacy

Information provided by AI can be incorrect, and AI can create hallucinations (false or incorrect information presented in a very convincing manner)20

  • It is important to use information literacy skills to evaluate AI output

How to Fact-check AI

Break down the information

  • AI output is not a single source of information, but rather a combination of multiple sources that could be both factual and false
  • For this reason, it is useful to break down the information by isolating specific, searchable claims that can be evaluated independent of each other
  • This is called fractionation

Lateral reading

Consult other sources to verify the information

  • Open a new tab outside of the AI tool to determine whether credible, non-AI sources can confirm the information
  • Examples: Google search, Wikipedia, FSC Library Search
  • Make a judgment call. What here is true, what is misleading, and what is factually incorrect? Can you re-prompt the AI to try and fix some of these errors? Can you dive deeper into one of the sources you found while fact-checking? Repeat this process for each of the claims the AI made.

Check for hallucinated references

If you prompt AI to provide citations and references for its information, it may generate convincing citations for sources that do not actually exist20

Use Google Scholar or the FSC Library Search

  • Search by putting the title of the resource in quotes, or
  • Locate journal articles by searching for the journal title, then look for the volume/year and issue numbers listed in the citation

More things to consider

  • Is the AI putting correct information in the wrong context, such as attributing a fake article to a real author?
  • Who would know things about this topic? Would they have a different perspective than what the AI is offering? Where could you check to find out?21

Videos explaining AI fact-checking and lateral reading

The way you write prompts shapes the AI’s output

  • Prompt engineering involves selecting the right words, phrases, symbols, and formats to get the best possible result from AI models22
  • Prompt iteration is the process of refining and improving prompts by creating initial prompts, evaluating their effectiveness, and making changes to enhance the output quality23

Tips for effective prompting

Effective prompts are straightforward, to the point, and include all necessary details without excess information

By being clear about your requirements, concise in your language, and specific about your expectations, you dramatically improve the quality of results

  • Specify desired outcome: Elaborate on details, examples, or steps
  • Include relevant context: Add references, time periods, or nuance in your prompts
  • Provide clear instructions: Write prompts like a teacher assigning tasks
  • Iterate and refine: Rephrase prompts if the output isn’t satisfactory24
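As a concrete illustration of the iterate-and-refine step, the sketch below sends a vague prompt and then a refined version so the two outputs can be compared. It is an illustration only, not a method prescribed by the cited sources; it assumes the OpenAI Python SDK is installed, an API key is stored in the OPENAI_API_KEY environment variable, and the model name is a placeholder.

    # Prompt iteration sketch (assumptions: OpenAI Python SDK installed,
    # OPENAI_API_KEY set in the environment, placeholder model name).
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    def ask(prompt: str) -> str:
        """Send a single prompt and return the model's reply text."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model could be used
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Iteration 1: a vague prompt
    first_try = ask("Explain photosynthesis.")

    # Evaluate the output, then refine with audience, format, and scope
    second_try = ask(
        "Explain photosynthesis to a first-year biology student in five "
        "bullet points, ending with one everyday example."
    )

    print(first_try)
    print(second_try)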
Examples of direct verbs

These clearly communicate specific actions and expectations:

List - Asks for a clear, itemized response

Explain - Requests a detailed clarification of a concept

Calculate - Asks for a specific numerical result

Describe - Requests detailed characteristics of something

Identify - Asks to name or recognize specific elements

Show - Requests a visual or clear demonstration

Detail - Asks for comprehensive information on specific aspects

Examples of indirect verbs

These are vague and leave room for interpretation:

Help - Doesn’t specify what kind of assistance is needed

Discuss - Too open-ended without clear direction

Consider - Doesn’t indicate what action should follow

Explore - Lacks boundaries and specific outcomes

Understand - Doesn’t specify what should be done with understanding

Deal with - Unclear about the expected approach

Look at - Doesn’t indicate what to look for or what to do after looking24

Example of a vague prompt and an improved version

Prompt:
“You are a math tutor” 


Why it’s weak:
This prompt lacks specifics about teaching approach, student level, mathematical topics, or expected format. It gives the AI too much room for interpretation.

Improved prompt:
“You are a math tutor that explains the process of solving math problems to a college student using clear and simple language, starting by defining what the problem is and following with a step-by-step method to solve it, including how to check the solution.”

Why it’s strong:
This prompt clearly defines the role, audience, approach, structure, and verification method for every response24

Provide specific context by defining the AI's role, target audience, specialized knowledge, and desired tone

Define a precise role

A well-defined role helps the AI understand exactly how to approach the task with authentic expertise

Transform the AI from a generic assistant to a specialized expert

  • Choose a specific professional persona (e.g., senior marketing strategist, forensic data analyst)
  • Outline the persona’s unique background, expertise, and approach
  • Specify years of experience or notable achievements to add credibility
    • Professional background
    • Years of experience
    • Specialized expertise
    • Unique perspective

Specify the target audience

  • Provide detailed information about the recipient of the information
  • Tailoring the context to a specific user helps responses to be calibrated to their understanding and requirements
  • Define demographic details (age, profession, expertise level)
  • Describe the user’s prior knowledge and communication preferences
    • Demographics
    • Prior knowledge
    • Learning preferences
    • Specific needs

Establish tone and communication style

Create a detailed guide for how the AI should communicate

  • Select a specific communication tone (e.g., academic, conversational, mentorship-based)
  • Define language complexity appropriate to the audience
  • Specify preferred metaphors, examples, or explanation styles
  • Outline any subject-specific vocabulary or communication style
    • Tone (formal/casual)
    • Language complexity
    • Cultural considerations25

Shallow context prompt (less effective)

Example Prompt:
“Help me with marketing”

Contextual Limitations:

  • No role specification
  • Undefined audience
  • Vague objective
  • No communication guidelines

Results in generic, unfocused outputs that lack precision and value.

Deep context prompt (more effective)

Example Prompt: “You are a senior B2B technology marketing strategist with 15 years of experience in enterprise software marketing. Your audience is mid-level marketing managers at SaaS startups seeking to develop their first comprehensive go-to-market strategy. Communicate in a mentorship tone—professional yet encouraging, breaking down complex concepts into actionable insights. Use real-world tech marketing examples and avoid unnecessary jargon.”


Contextual Strengths:

  • Defined expert role
  • Specific target audience
  • Clear communication approach
  • Detailed expectation setting

Enables highly targeted, nuanced, and valuable outputs.

Shallow context prompt (less effective)

Example Prompt: “Create a lesson plan”

Contextual Limitations:

  • No educational context
  • Undefined learning objectives
  • Unspecified student demographics
  • No pedagogical approach

Produces generic, potentially misaligned educational content.

Deep context prompt (more effective)

Example Prompt: “Design a science lesson plan for 7th-grade students with varying learning abilities. Focus on inquiry-based learning for a unit on environmental sustainability. The class includes students with mild learning differences, so include multi-modal learning approaches. Use a supportive, growth-mindset tone that encourages curiosity and collaborative learning. Lessons should incorporate hands-on activities, visual aids, and opportunities for student-led investigation.”

Contextual Strengths:

  • Specific educational level
  • Clear learning approach
  • Consideration of student diversity
  • Defined communication style

Generates a tailored, inclusive, and engaging learning experience25
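For readers who reach an AI model through code rather than a chat window, the same deep-context approach can be expressed by putting the role, audience, and tone in a system message and the task in a user message. The sketch below is illustrative only; it assumes the OpenAI Python SDK, an API key in the OPENAI_API_KEY environment variable, and a placeholder model name, and it condenses the lesson-plan example above.

    # Deep-context prompting sketch (assumptions: OpenAI Python SDK,
    # OPENAI_API_KEY set in the environment, placeholder model name).
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {
                # Role, audience, and tone go in the system message
                "role": "system",
                "content": (
                    "You are an experienced 7th-grade science teacher designing "
                    "inquiry-based lessons for students with varying learning "
                    "abilities. Use a supportive, growth-mindset tone."
                ),
            },
            {
                # The specific task goes in the user message
                "role": "user",
                "content": (
                    "Design a lesson plan on environmental sustainability that "
                    "includes hands-on activities, visual aids, and "
                    "opportunities for student-led investigation."
                ),
            },
        ],
    )
    print(response.choices[0].message.content)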

Specify what to avoid or exclude in the AI output. Rather than telling the AI what to do, you’re telling it what to avoid in its responses

Using negative prompting helps define boundaries and refine AI responses by clarifying what should be excluded

  • Helps avoid unwanted content
  • Improves output quality and relevance
  • Provides clearer guardrails for the AI
  • Reduces the likelihood of inappropriate or off-target responses
  • Is particularly effective for refining AI outputs26

Ask the AI to refine or create a prompt for you. Then, provide that prompt to the AI3

Audit regularly

  • Review AI outputs for representational issues
  • Look for patterns of exclusion or stereotyping
  • Consider whose perspectives are centered
  • Check for assumptions about “normal” or “typical”

Recognize different types of bias

  • Representational bias (who is shown/not shown)
  • Allocational bias (how resources are distributed)
  • Quality of service bias (who is served better)
  • Stereotypical bias (reinforcing stereotypes)
  • Historical bias (reflecting historical inequities)27
Prompt example vs. more inclusive prompt

Prompt: Who’s scored the most international soccer goals in history?
More inclusive: Who has scored the most goals in soccer history? Make sure to consider athletes of all genders.

Prompt: Write a story about a scientist making a groundbreaking discovery.
More inclusive: Write a story about a scientist from an underrepresented group in STEM making a groundbreaking discovery. Consider scientists of various genders, ethnicities, and abilities.

Prompt: Describe a typical family’s daily routine.
More inclusive: Describe the daily routine of a family. Consider various family structures, cultural backgrounds, and socioeconomic situations.

Prompt: Describe career options after high school graduation.
More inclusive: Describe a range of options after high school including college, career, and non-traditional paths. Do not provide any value judgment over any of these options27

Prompt types, descriptions, and examples

Zero-shot prompt: Give simple and clear instructions without examples. Useful for a quick, general response. Example: “Summarize this article in 5 bullet points.”

Few-shot prompt: Provide a few examples of what you want the AI to mimic. Helps the model learn your desired structure or tone. Example: “Here are 2 example summaries. Write a third in the same style.”

Instructional prompt: Include direct commands using verbs like "write," "explain," or "compare." Example: “Write an executive summary of this memo. Keep it under 100 words.”

Role-based prompt: Ask the AI to assume a particular persona or viewpoint. Useful for creativity and domain-specific responses. Example: “You are an MBA professor preparing a lecture outline...”

Contextual prompt: Include relevant background or framing before asking a question. Helps the AI tailor responses to a specific audience or setting. Example: “This text is for an undergrad course on behavioral econ. Rephrase it in simpler language.”22
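To show how two of these prompt types look when sent through an API rather than a chat window, the sketch below issues a zero-shot prompt and a few-shot prompt. It is illustrative only; it assumes the OpenAI Python SDK, an API key in the OPENAI_API_KEY environment variable, and a placeholder model name, and the example sentences are invented for demonstration.

    # Zero-shot vs. few-shot prompting sketch (assumptions: OpenAI Python SDK,
    # OPENAI_API_KEY set in the environment, placeholder model name).
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder

    def run(prompt: str) -> str:
        response = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Zero-shot: instructions only, no examples
    zero_shot = "Rewrite this sentence in a formal tone: 'can u send the report asap'"

    # Few-shot: two worked examples first, then the new input to mimic
    few_shot = (
        "Rewrite each sentence in a formal tone.\n"
        "Example: 'gonna be late' -> 'I will be arriving late.'\n"
        "Example: 'thx for the help' -> 'Thank you for your assistance.'\n"
        "Now rewrite: 'can u send the report asap'"
    )

    print(run(zero_shot))
    print(run(few_shot))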

 

 

References

1.

metaLAB (at) Harvard. Key terms. AI Pedagogy Project. https://aipedagogy.org/guide/key-terms/ 

2.

Stanford University. The Stanford emerging technology review 2025: A report on ten key technologies and their policy implications. Artificial Intelligence. https://setr.stanford.edu/sites/default/files/2025-01/SETR2025_web-240128.pdf 

3.

Carnegie Mellon University. "AI for learning": Integrating artificial intelligence into your teaching. Open Learning Initiative. https://oli.cmu.edu/courses/ai-for-learning/ 

4.

Stryker, C., & Kavlakoglu, E. What is artificial intelligence (AI)? IBM. https://www.ibm.com/think/topics/artificial-intelligence 

5.

Stryker, C., & Scapicchio, M. What is generative AI? IBM. https://www.ibm.com/think/topics/generative-ai 

6.

Mollick, E., & Mollick, L. Student use cases for AI. Harvard Business Impact. https://hbsp.harvard.edu/inspiring-minds/student-use-cases-for-ai 

7.

Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 1-25. https://doi.org/10.48550/arxiv.2305.00280 (Access via FSC)

8.

Chan, C. K. Y., & Hu, W. (2023). Students' voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 1-18. https://doi.org/DOI:10.1186/s41239-023-00411-8 (Access via FSC)

9.

Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10(1), 1-23. https://doi.org/10.1186/s40561-023-00269-3 (Access via FSC)

10.

Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), 1-11. https://doi.org/10.30935/cedtech/13036 (Access via FSC)

11.

SUNY FACT². (2024). FACT² guide to optimizing AI in higher education (2nd ed.). Pressbooks. https://fact2aiv2.pressbooks.sunycreate.cloud/ 

12.

Valova, I., Mladenova, T., & Kanev, G. (2024). Students' perception of ChatGPT usage in education. International Journal of Advanced Computer Science and Applications, 15(1), 466-473. https://doi.org/10.14569/IJACSA.2024.0150143 (Access via FSC)

13.

IBM. (2025). Impact on the environment risk for AI. IBM watsonx. https://www.ibm.com/docs/en/watsonx/saas?topic=atlas-impact-environment 

14.

Elon University & American Association of Colleges and Universities (AAC&U). (2024). A student guide to navigating college in the artificial intelligence era. AAC&U. https://studentguidetoai.org/wp-content/uploads/2024/08/Student-Guide-to-AI-final-081224.pdf 

15.

Stanford Center for Teaching and Learning. AI and your learning: A guide for students. Stanford University. https://ctl.stanford.edu/aimes/ai-learning-guide-students 

16.

Mollick, E., & Mollick, L. (2023). Student use cases for AI: AI as feedback generator. Harvard Business Impact. https://hbsp.harvard.edu/inspiring-minds/ai-as-feedback-generator 

17.

Mollick, E., & Mollick, L. (2023). Student use cases for AI: AI as personal tutor. Harvard Business Impact. https://hbsp.harvard.edu/inspiring-minds/ai-as-personal-tutor 

18.

Mollick, E., & Mollick, L. (2023). Student use cases for AI: AI as team coach. Harvard Business Impact. https://hbsp.harvard.edu/inspiring-minds/ai-as-team-coach 

19.

Stanford University IT. GenAI use cases for experimenting. Stanford University. https://uit.stanford.edu/ai/use-cases 

20.

Stanford University IT. Responsible AI at Stanford. Stanford University. https://uit.stanford.edu/security/responsibleai 

21.

University of Maryland Libraries. (2025). Artificial intelligence (AI) and information literacy: Assess content. University of Maryland. https://lib.guides.umd.edu/c.php?g=1340355&p=9880575 

22.

MIT Teaching and Learning Technologies. Effective prompts for AI: The essentials. MIT Management STS Teaching & Learning Technologies. https://mitsloanedtech.mit.edu/ai/basics/effective-prompts/ 

23.

Prompt Layer. Prompt iteration. https://www.promptlayer.com/glossary/prompt-iteration 

24.

Playlab. (2025). Be clear, concise, and specific. Basic Prompting. https://learn.playlab.ai/prompting/basic/be%20clear%20concise%20and%20specific 

25.

Playlab. (2025). Context is key. Basic Prompting. https://learn.playlab.ai/prompting/basic/context%20is%20key 

26.

Playlab. (2025). Negative prompting. Basic Prompting. https://learn.playlab.ai/prompting/basic/negative%20prompting

27.

Playlab. (2025). Be mindful of bias. Basic Prompting. https://learn.playlab.ai/prompting/basic/be%20mindful%20of%20bias

Last Modified 10/31/25