Let’s be real, figuring out what you can and can’t do with AI at university is confusing. One lecturer says it’s okay to use for some things. Another threatens disciplinary action if you even think about ChatGPT.

Hi, if you don’t know me, I’m Dr Theresa Orr.
You’re welcome to learn more about how I help students succeed at university with my Uni Pro Accelerator Course.
In general, AI can be used to brainstorm ideas and explain complicated university jargon or assignment instructions, but it crosses the line into cheating when you use it to source information or write your assessments.
So where exactly is the line between smart help and plagiarism or cheating? The truth is, it depends on what you’re using AI for, and your university’s policy (which can change between subjects). But to make it easier, I’ve broken it down into three categories: ✅ things that are definitely okay, ⚠️ things in the grey zone, and 🚫 things that will get you flagged for plagiarism or academic misconduct.
You can also watch my video on using AI at university.
What You Can Use AI For
AI is an amazing thinking tool. The best way I can explain it: use AI like you’d use a study buddy. Talk to chatbots, bounce ideas off them, and when there’s something you don’t understand (yet), ask them to explain it to you.
Here are things that are 100% okay at most universities (I say ‘most’ because there are always some people out there who think a zero-AI policy is the way to go, but let’s be honest, they’re dinosaurs ignoring technology at this point):
1. Brainstorming Ideas
One of the best ways to use AI tools like ChatGPT at uni is for brainstorming. If you’ve been given an open-ended question and don’t know where to start, AI can help you come up with possible angles, key points, or even real-world examples to explore.
Think of it like bouncing ideas off a friend, only this one is available 24/7 and won’t get distracted halfway through. It’s a great way to:
- narrow down a topic
- look at different ways to approach a question (or come up with different angles)
- kickstart your thinking when you’re feeling stuck
Example:
You’re asked to write an essay on “The impact of social media on mental health.” You could ask an AI chatbot:
“What are some different angles I could take for an essay on this topic?”
AI might suggest things like:
- Focusing on teenagers vs adults
- Looking at specific platforms (e.g. TikTok, Instagram)
- Exploring the role of algorithms
- Comparing the effects of active vs passive use
From there, you can decide which idea or path interests you most, and start planning your essay with a clearer direction.
Just remember: AI doesn’t know your specific assignment. You still need to double-check that your approach fits the assessment your lecturer wants you to complete.
2. Understanding Assignment Instructions
Let’s be honest, assignment instructions aren’t always easy to follow. I often found them confusing or vague, so I never felt like I knew exactly what I needed to do for an assessment. Uni-speak like “critically analyse,” “evaluate,” or “discuss with reference to scholarly sources” can leave you staring at the page wondering what you’re actually meant to do (at least it did for me).
AI tools like ChatGPT can help translate those confusing instructions into everyday language so you know exactly what’s being asked, and then you can get started writing faster.
It’s especially helpful if:
- your assignment brief is vague or full of technical jargon
- you’re not sure what level of detail is expected
- English isn’t your first language
- you just want a second opinion before getting started
Example:
Your instructions say:
“Critically examine the impact of globalisation on small business operations.”
You can paste that into ChatGPT and ask:
“Can you explain what this sentence means in plain English, with an example?”
It might respond with something like:
“You need to look at both the good and bad ways that globalisation affects small businesses. Globalisation means that the world is more connected—businesses can trade, sell, and communicate more easily with people in other countries. For example, a small business might get cheaper materials from overseas or sell to international customers. But they might also face more competition from large global companies.”
This kind of breakdown helps you actually understand what the question is asking, so you’re not guessing and then finding out only when your marks come back that you’d misread what your lecturer wanted. The other bonus is that once you get used to what these uni terms mean, you’ll be much quicker at figuring out what your lecturer wants in future assignments.
The more you practise decoding assignment language, the easier it gets, I promise.
3. Explaining Academic Terms Or Concepts
Journal articles are full of academic (i.e. confusing and wordy) terms that no one ever really explains, but you’re expected not only to understand them, but also to use them in assignments. Whether it’s systematic reviews, epistemology, inductive reasoning, or discourse analysis, academic language can make it seriously hard to get started, let alone finish an assignment on time and get a good mark.
This is where AI can really help.
You can copy and paste in a term or a whole paragraph from a reading, journal article or lecture and ask AI to explain it like you’re five, or like you’re a tired uni student who just wants a straight answer.
It’s especially useful for:
- making sense of tricky readings
- understanding feedback from lecturers
- checking you’ve got the right meaning before using a term in your work
Example:
You’re reading a journal article and keep seeing the phrase “This study used a systematic review methodology…”
Not sure what that means? Try this prompt in ChatGPT:
“Can you explain what a systematic review is, in simple language, with an example of how it works?”
You might get something like:
A systematic review is a formal method for collecting and summarising all the research on a specific question. Researchers follow a strict, step-by-step process to find studies, decide which ones to include, and then analyse them together.
By comparison, a regular literature review is usually more flexible. It gives an overview of key studies, but the writer chooses what to include based on their own judgement, not a strict process.
Example: Imagine you’re researching whether digital learning improves student outcomes. A systematic review would search multiple databases using set keywords, include only certain types of studies, and assess each study’s quality. A regular literature review might just summarise the five or six most relevant papers the author found.
This kind of breakdown helps you understand concepts, making it easier to follow what you’re reading and to use terms correctly in your own work.
4. Study Questions Or Prompts
AI is a real game changer when it comes to studying. It makes active recall (the best study method – read more on it here) easy, so you’re not just passively reading your notes or highlighting a textbook and wasting your time.
You can use it to:
- generate your own practice questions after a lecture
- test yourself on key ideas and terms
- reframe notes into flashcard-style Q&As
- prepare for exam topics using different question types (short answer, multiple choice, etc.)
It’s like having a study partner who throws you questions to help you check what you understand and what you need to go over again.
Example:
You’ve just finished revising a topic on climate change impacts for your environmental science class.
Give ChatGPT your lecture slides and notes and try prompting with:
“Can you give me five short-answer practice questions on climate change impacts (based on the material provided), aimed at a first-year uni student?”
You might get something like:
- What is the difference between climate variability and climate change?
- Name three major consequences of rising global temperatures.
- How does climate change affect ocean currents?
- Explain how climate change can increase the risk of natural disasters.
- What are some social or economic impacts of climate change in Australia?
You could also take it a step further and ask:
“Can you turn those questions into a mini quiz with answers included?”
or
“Can you create a multiple-choice version of those questions?”
This helps you actively recall what you know and gives you something way more useful than just re-reading notes (I don’t know about you, but I always feel better when my study session is productive rather than trying to stay awake while reading endless notes).
The Grey Areas Of AI (AKA: Use With Caution)
This is where things get murky with AI at university. These generative tools are helpful, but how (and how much) you use them matters.
1. Asking AI To Find Sources For You
Getting chatbots to find sources of information (e.g., journal articles) sounds like a good idea, but AI is actually completely unreliable for this. It can make up citations and information (a phenomenon called hallucination) or pull from dodgy websites. And yes, I’m talking about ChatGPT, Bard, Copilot, Gemini, etc. It doesn’t matter if they can search the web in real time; they’re all as bad as each other.
In fact, these AI tools have a track record of about 67% accuracy (at best) when it comes to getting sources correct. If you’re asking a generative AI chatbot to give you references or journal articles, you need to be smart about it:
- Always ask for URLs or DOIs so you can check them yourself.
- Limit searches to trusted databases like Google Scholar, PubMed, or JSTOR.
- Never copy and paste references directly into your bibliography without checking every detail.
Some universities or lecturers might be fine with you using AI to suggest places to look, while others see it as outsourcing the research process, which, let’s be honest, it kind of is.
My recommendation is to always check your assignment brief or subject outline, or ask your tutor whether using AI to source information is okay for your subject and its specific assessments. I’ve been researching for years, so for me it’s actually faster to do my own research than to use AI. I know what I’m looking for and how to find it quickly, which is a better use of my time than wrestling with ChatGPT’s false information and getting angry with it (yes, I know it’s silly to get mad at tech…). But researching takes practice, and that’s something you’ll never get if you always use AI (just something to think about).

Example:
You’re writing an essay on the effects of screen time on sleep in teenagers. You ask ChatGPT:
“Can you give me five peer-reviewed journal articles on how screen time affects adolescent sleep?”
It replies with a neat list of article titles, authors, and journals. Looks perfect and helpful until you try to find the articles and discover that two of them don’t exist, one is misattributed, and one links to a random blog post.
That’s why you must double-check everything, which is when it can start taking longer than if you did the research yourself.
ScholarGPT is slightly better than some other AIs. One way to use it more effectively for this task is to ask it for a list of authors who are authorities on the topic you’re studying or researching. That way you can look those authors up on Google Scholar, get immediate access to all of their work, and start your assignment faster.
2. Getting AI To Rewrite Your Work
Using something like Grammarly to check spelling, grammar, or sentence clarity is usually fine. Most universities treat these tools like spellcheckers, a way to polish your work, not replace it. It’s no different to using the Editor in Microsoft Word for spelling and grammar.
But when you go that step further and hit that tempting “Rewrite for me” button, the game changes.
Even if you wrote the original text yourself, if AI is doing the rewriting for you, it’s no longer entirely your work. The line between helpful editing and ghostwriting gets blurry fast. At university (or in academia), this borders on plagiarism.
This matters because:
- The rewritten version might no longer reflect your own voice
- You might not fully understand or be able to explain the writing
- Some tools (like Grammarly or Quillbot) can completely restructure your paragraph, which can breach assessment policies
Example:
You write:
“The experiment’s results were inconclusive, which may have been due to sample size limitations.”
Then you paste it into an AI tool and hit “Rewrite.” It gives you:
“Due to a small sample size, the experiment failed to produce conclusive results.”
It might sound better (or not). But now the sentence structure, flow, and tone aren’t yours, and if you do that with every paragraph, it will set off red flags for your marker.
It also creates a bigger issue. If you can’t explain what you wrote or why you said it that way (especially in presentations, exams or future assessments), it becomes clear you didn’t fully understand or write it yourself.
The safest approach is to use tools that highlight suggestions and give you the option to improve your writing, rather than rewriting it for you. Then you can make the changes you want to make, and have it still sound like you.
3. Using AI Tools In Coding Or Technical Assessments
I’ll be the first to admit that I use ChatGPT to help me write code for my website. But that’s for personal use, and it helps me in the same way hiring a person to code it would (I’m not expected to know how to do everything!).
But while using AI tools like ChatGPT, Codex, or GitHub Copilot feels like the smart and fast way to get your coding assessments done, the tasks you get them to do can either be ‘okay’ or cross the line into red flag territory 🚩. If AI is doing the logic, structure, and syntax for you, that can quickly become academic misconduct. This matters because:
- You’re submitting work that may not reflect your actual understanding
- If asked to explain it, you might not be able to
- It undermines the whole point of the assignment, which is to develop problem-solving skills you’ll need in the real world (and in your real job that’s paying you $$$ for your skills)
Example:
Let’s say your assignment is to write a function that filters and sorts user data in Python. You paste the assigned question into ChatGPT and get a fully functional script.
Sounds great until your tutor asks you to explain how the list comprehension works… and you can’t. Even if the code is technically correct, if you didn’t write it or can’t explain it, you’re likely to get questioned, or worse. So if you’re using AI:
- Use them to understand how a concept works (e.g., syntax or formatting)
- Ask them to explain sections of code you’re struggling with
- Don’t use them to write your entire submission
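To see why “explain it, don’t write it” matters, take the list comprehension from the example above: it’s really just a plain loop in disguise, and it’s worth being able to say so. Here’s a minimal sketch (the user records and field names are made up for illustration) showing the same filter written both ways, plus a sort:

```python
# Hypothetical user records -- the names and fields are invented for illustration.
users = [
    {"name": "Alice", "age": 24},
    {"name": "Bob", "age": 19},
    {"name": "Cara", "age": 31},
]

# Plain for loop: collect users aged 21 or over.
adults = []
for user in users:
    if user["age"] >= 21:
        adults.append(user)

# The same filter as a comprehension, then sorted by age.
adults_sorted = sorted(
    (u for u in users if u["age"] >= 21),
    key=lambda u: u["age"],
)

print([u["name"] for u in adults_sorted])  # → ['Alice', 'Cara']
```

If you can walk your tutor through both versions and explain why they give the same result, you’re using AI to learn rather than to outsource.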
Helpful (and ethical) prompts to use:
“Can you explain what this line of code does?”
“What does this error message mean, and how can I fix it?”
“Can you show me an example of a for loop that filters a list of numbers in Python?”
“What’s the difference between a list and a dictionary in Python?”
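Prompts like these work well because the answers are easy to check for yourself. For instance, the list-vs-dictionary distinction (sketched in Python, where dictionaries are a built-in type; the unit codes are invented) boils down to:

```python
# A list is an ordered sequence: you access items by position.
marks = [72, 85, 64]
print(marks[0])  # → 72 (the first item)

# A dictionary maps keys to values: you access items by name.
# The unit codes here are hypothetical examples.
marks_by_unit = {"PSYC101": 72, "CRIM102": 85}
print(marks_by_unit["CRIM102"])  # → 85
```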
I know it’s annoying, but try not to get AI to write your code. Especially if you plan on using code in your career, you’ll want to build those skills while you’ve still got the safety net of uni.
4. Not Checking Assignment Rules
This is one of the easiest ways to get into trouble. Some assignments or classes will clearly state “AI use is not permitted” or something similar.
Others might say it’s okay for brainstorming but not for writing. And some won’t mention AI at all, leaving you to figure it out. That’s the frustrating part. There’s no consistent rule across universities, or even within the same degree. One subject might encourage AI use, while another will treat it as academic misconduct.
So here’s what you can do:
- When in doubt, play it safe. If you’re unsure, only use AI for idea generation, clarification, or spell checking, not writing or sourcing.
- Read your assignment instructions carefully. Look for anything that mentions AI use, originality, or authorship.
- Check your subject outline or course guide. Some now include an official section on generative AI.
- Ask your lecturer or tutor directly. It’s better to be clear upfront than assume something’s okay and get flagged later.
Example:
You’re doing two subjects this semester. In your Psychology unit, the lecturer encourages using AI to help brainstorm arguments, as long as it’s your own writing. But in your Criminology unit, the instructions say:
“You must not use ChatGPT or any generative AI tools for this assignment. Doing so will be considered a breach of academic integrity.”
If you don’t catch that small line in the Criminology task sheet and use ChatGPT, even just to help you reword a paragraph, then you’re at risk of being reported for misconduct.
Even worse, your university might consider it intentional misuse if the rules were clearly stated and you didn’t read them.
Unfortunately, the onus is on you to double-check every class and every assessment. It’s annoying, but it’s the only way to stay safe while AI rules are still evolving.
What You Shouldn’t Use AI For
If you use AI for any of the following in university, you’re risking plagiarism, academic penalties (like reduced grades, or even getting kicked out), and worse, missing out on the learning component (which is actually important when it comes to progressing your career after uni):
- Asking AI to write full paragraphs or entire assessments
- Feeding in dot-points and asking for “a good version”
- Copying and pasting AI-generated summaries, explanations, or citations
- Submitting answers you don’t understand or couldn’t explain
- Getting it to rewrite your thoughts so heavily they’re no longer yours
Using AI-generated Text Or Code In Your Assignments
This one’s simple: if you copy something written by an AI tool and submit it as your own work, that’s plagiarism.
Even if you change some words or tweak the structure. Trust me, markers can usually tell. Even if AI-generated writing “sounds academic,” it often lacks depth, context, or relevance (we call it fluff). If the original wording came from ChatGPT, Copilot, or any other AI tool and you don’t clearly cite it, you’re essentially lying about who wrote it. That’s the textbook definition of plagiarism. It’s the same as sharing content on social media: you always need to give credit to whoever came up with the content.
BUT here’s the catch: even if you do cite it, many universities still don’t allow AI-written content in assessments. So even being honest about it won’t necessarily save you.
Why it’s a problem:
- It misrepresents the work as your own original thinking
- It bypasses the learning process your assignment was designed to develop
- AI content can be vague, generic, or incorrect, and you’ll still be responsible for it (I added some to this post – see if you can find it… it’s pretty obvious!)
- Most universities consider using generative AI to write any part of your assessment as academic misconduct unless it’s explicitly allowed
Example:
You have to write a reflection for a health science assignment and feel stuck. So you write out a rough draft. It’s clunky, but your ideas are there. You paste it into ChatGPT and ask:
“Can you rewrite this to sound more academic and polished?”
It spits out a perfect paragraph. You submit that version.
Even though the ideas were yours, the final writing wasn’t. It’s still plagiarism, and your lecturers will probably pick it up when they put it through Turnitin.
Citing AI tools
You can cite generative AI tools using APA or MLA (e.g. “OpenAI, 2023”), but that doesn’t mean your uni allows it. You still have to check your assignment brief or ask your tutor.
For example, in APA (7th edition) your citation would be (OpenAI, 2023) or according to OpenAI (2023), and the reference would be:
Reference list:
OpenAI. (2023, March 14). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
Unless you’ve been told it’s okay to include AI-generated text, the safest option is not to copy and paste at all. Use AI to think, chat and help, not to write.
| ✅ Okay | ⚠️ Grey Area | 🚫 Not Okay |
|---|---|---|
| Brainstorming | Asking for sources | Copy/pasting AI-generated text |
| Explaining terms | Heavy rewriting/paraphrasing | Getting AI to write for you |
| Clarifying tasks | Coding help (depends on use) | Submitting answers you didn’t write or understand |
| Study questions | Not checking your unit’s AI rules | Using fake AI citations |
The real cost
Using generative AI to write or create your assessments isn’t just risking misconduct penalties. You’re also missing the chance to build essential skills like writing, critical thinking, and analysis. Those are the exact skills employers expect when you graduate, and AI won’t always be allowed in the workplace, especially in government or confidential roles because of privacy laws.
The best way to use AI is like a friend or tutor, not a ghostwriter. If you’re not doing the thinking, you’re not doing the learning (and yes, I know that makes me sound like an old fogey, but I’m not wrong).
Here are some free tools you should be using though.
