
How Lecturers Know You Used ChatGPT

I tested ChatGPT 3.5 and 4 (edit: and now version 5) more than 20 times, and not once could it get a single reference correct. As a lecturer, I can tell you that references are the number one way we know you’ve used ChatGPT to write your assignment.

The same applies to Bard and Copilot (and others). Some have a higher success rate with their references, but the best I ever got was Bard, with 65% of citations being correct. The rest were simply made up.

Sure, there are the more obvious AI giveaways as well, like filler content, fluff words, or over-the-top metaphors. But it’s your references that expose you.

You can also watch my video on how lecturers know you’ve used ChatGPT.

Why References Are a Dead Giveaway of AI Use

Lecturers and professors are experts in their fields. Believe it or not, we actually know the key authors, publication years, journals, and even the article titles of the research in our area. So when ChatGPT invents a reference, or worse, invents a realistic-looking URL that leads to a 404, it stands out instantly.

And here’s the truth: we know a student isn’t going to go to the trouble of making up a reference. That means it came from ChatGPT (or other AI).

When we see citations in your essay but the reference list doesn’t match reality, that’s a red flag for plagiarism or academic misconduct. In fact, sometimes we don’t even need to check the reference list: we can tell from the citations alone, because we know a specific author didn’t say that, or because the citation names an author we’ve never heard of (unlikely in our own area of research).

How ChatGPT Messes Up References

When asked to create references (or to cite its sources), ChatGPT (and Bard, Copilot, etc.) often:

  • Mixes details: combines a real author’s name with the wrong title or journal, or takes real titles and real authors from different papers and smooshes them together.
  • Invents URLs: gives you a link that looks right but goes nowhere.
  • Fabricates sources: creates a reference that sounds credible but doesn’t exist (and never did).

The reason ChatGPT (and other AI tools) mess up citations is simple: they’re language models, not databases. Instead of pulling from a verified library of references, they predict what “looks like” a correct reference based on patterns in the text they were trained on.

That means:

  • If the model has “seen” similar words together during training, it might stitch them into something that looks like a real citation, even if it doesn’t exist.
  • It doesn’t “know” whether an article was ever published in a journal; it just knows that author names, years, and journal titles usually appear together.
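To make the stitching idea concrete, here is a toy sketch. It is purely illustrative: the fragment lists and the function are made up, and no real language model works this simply, but it shows how parts that are each real on their own can be recombined into a citation that never existed.

```python
import random

# Toy illustration only (NOT how a real language model is implemented).
# Imagine these are fragments the "model" has seen during training,
# each taken from a different real-looking paper.
authors = ["Smith, J.", "Nguyen, T.", "García, M."]
years = ["(2018)", "(2020)", "(2021)"]
titles = [
    "Construct validity in survey design",
    "Stress-strain behaviour of composite beams",
    "Feedback and learning in higher education",
]
journals = [
    "Journal of Applied Psychology",
    "Engineering Structures",
    "Studies in Higher Education",
]

random.seed(1)  # deterministic, so the example is repeatable

def plausible_citation():
    # Stitch together parts that frequently co-occur in training text.
    # Each part may be genuine in isolation, but the combination is
    # fabricated: a perfectly formatted reference to a paper that
    # was never written.
    return " ".join([
        random.choice(authors),
        random.choice(years),
        random.choice(titles) + ".",
        random.choice(journals) + ".",
    ])

print(plausible_citation())
```

The output always looks like a properly formatted reference, which is exactly why these fabrications slip past a quick visual check and only fall apart when you search for the paper.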

Different AIs fail in slightly different ways:

  • ChatGPT is the worst for “hallucinated” references: full citations that are completely made up, but formatted perfectly.
  • Copilot (Microsoft) often pulls in real-sounding titles from Microsoft’s search integrations, but it can still attach them to the wrong author or journal.
  • Bard (Google) has the advantage of web access, so it gets more right than the others, but it still gets things wrong: in my tests, around 35% of references were either broken links or mismatched details.

So even if a reference looks correct, you need to check it yourself on Google Scholar, your university library, or the original journal database if you want to avoid getting caught using AI at university (which can often take longer than doing the research yourself the first time around).

Ultimately, I would never rely on ChatGPT to create a reference list, or to do my research for me. I use Google Scholar to speed up my research instead (learn how to do that here).

Other AI Giveaways Lecturers Notice

References aren’t the only way lecturers can tell when an assessment has been written (or heavily helped) by AI. Some other red flags include:

  • Overuse of transition words: Essays (or reports etc) that constantly use “furthermore,” “moreover,” or “additionally” feel unnatural. Students rarely write this way.
  • Generic, polished-but-empty tone: AI often produces sentences that sound smooth but don’t actually say much. Real student writing usually has less fluff and more rough edges, but also more genuine insight (yes, including yours).
  • Missing discipline-specific language: Every field has its own jargon. Psychology students talk about “construct validity,” engineering students about “stress–strain curves.” When that language is missing, it stands out.
  • Too evenly structured: AI often makes every paragraph the same length and rhythm, which is rare in real student work. Even if you edit it later, it’s still pretty obvious.

What Turnitin or SafeAssign Actually Flag

Tools like Turnitin or SafeAssign don’t directly say “this was written by AI.” Instead, they look for patterns such as:

  • Unusual phrasing or repetition: AI often reuses certain sentence structures, which can make your writing stand out as “suspicious.”
  • Low similarity scores but strange style: If your assessment has almost no overlap with existing sources (0–5% similarity) but reads like it came from nowhere, lecturers may take a closer look.
  • Inconsistent voice: If part of your essay is very polished (AI-like) and another part looks more like your natural writing (emails, past assignments), that contrast is noticeable.
  • Lack of citations: This one falls under ChatGPT plagiarism too. Remember: if you didn’t write it, you need to cite it.

The software doesn’t make the decision; your lecturer does. They’ll use the report as one piece of evidence, not as the whole case.

Universities Do Allow ChatGPT (But With Limits)

Don’t assume your lecturers are “old school” and don’t understand AI. Most researchers use ChatGPT too. In fact, when we submit journal articles, publishers now ask us how much we used it.

Universities usually allow ChatGPT, but not for generating your assignments. It’s a tool, like Microsoft Word or Google Scholar or, years ago, Encyclopedia Britannica.

Here’s how you can safely use it:

  • For brainstorming ideas: to break down a tricky question or suggest angles you hadn’t thought of.
  • For pointing you towards research areas: so you know what keywords or topics to look up.
  • For checking readability: especially if English isn’t your first language, it can polish your writing (just like Word can).

Not for writing your essay or reference list. That’s where you’ll get caught.

For a complete breakdown of what you can and can’t use AI for at university, read this guide.

The Real Risk of Misusing ChatGPT

Yes, plagiarism checks and Turnitin flags are a risk. But the bigger problem for you is that your time at university is about building skills.

Assignments are supposed to teach you how to:

  • Find information quickly.
  • Analyse and interpret it.
  • Add your own opinion or argument.
  • Communicate that clearly in writing (or public speaking).

If ChatGPT does all of that for you, you miss the point of university. Later, when you’re in the workforce, you won’t have those skills, and that’s what will hold you back in your career. It will be harder to get promotions, and potentially even to keep your job. You also can’t assume you’ll simply be able to use AI at work. AI chatbots are often blocked on workplace browsers due to privacy or security concerns, and while some businesses and even government departments have their own AI, it will never match your brain at working out what’s real, breaking it down, and communicating it to your colleagues.

How to Use ChatGPT the Right Way

Think of ChatGPT as a supportive study tool, not a substitute (or hack) for your own work.

  • Use it to clarify assignment instructions if they’re confusing.
  • Ask it to summarise complex academic jargon so you can understand it better.
  • Get it to suggest structure ideas for your essay, then fill in the content yourself.
  • Run your draft through it for grammar and readability feedback.

That way, you’re still doing the learning, but you’re also using the technology to your advantage.
