Using AI to Support Your Research

Evaluating Sources Found Through an AI Search

Just because information comes from an AI tool doesn't mean it can be trusted; in fact, there are many reasons not to trust it.

  • Consider the source of the information: Where does the AI's information come from? Can its claims be verified across multiple credible sources?
  • Consider systematic bias: What dataset has the AI been trained on? What biases exist in the algorithm? Is the output missing different perspectives or points of view?
  • Consider where the sources come from: Are they from the open internet, or from a database or other defined set?
  • If the AI tool provides sources for its output, cross-check those sources to ensure they are not hallucinations (see the sketch after this list for one way to automate part of that check).
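
One quick cross-check is confirming that a cited DOI actually exists. The sketch below is a minimal illustration, assuming Python 3 with the requests package installed; it queries the public Crossref REST API, and the DOI shown is a hypothetical placeholder rather than a real citation.

```python
# Minimal sketch: check whether a DOI from an AI-generated citation
# actually resolves, using the public Crossref REST API.
# Assumptions: Python 3 with the `requests` package installed;
# the DOI below is a hypothetical placeholder.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for the given DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Usage: paste in a DOI copied from an AI tool's citation.
print(doi_exists("10.1234/placeholder.doi"))  # False for a made-up DOI
```

A 404 response means Crossref has no record of the DOI, which is a strong sign the citation was fabricated. Note, though, that some legitimate DOIs are registered with other agencies (such as DataCite), so a miss here calls for a manual search rather than automatic rejection.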

Evaluating sources and determining if they meet your information need is an important part of being information literate. Remember, you're evaluating for credibility and usefulness.

Google's AI Overviews

When you do a Google search, Google will sometimes provide an AI Overview at the top of the search results. This is an AI-generated summary that draws on numerous individual sources from the search results.

The AI Overview by itself is not a source and should not be cited as such.

To evaluate information found in a Google AI Overview, click the link icon within the overview to expand it. The expanded view shows a list of the individual sources that were summarized to create the AI Overview.

Evaluate each source carefully for credibility and usefulness. Generative AI (the technology used to create AI summaries) has been known to "hallucinate," that is, to present incorrect or nonexistent content in response to a query. Hallucinated content can include made-up "facts," citations, code, and more.

The articles listed in the Related Resources box on the left provide more information on AI hallucinations.