January 10, 2026 · 5 min read

AI and A-Level Physics: Powerful Tool, Real Limits

Artificial intelligence is moving fast. AI tools are already transforming how students revise, how teachers prepare resources, and how explanations can be generated on demand. I actively use AI in my own business, and I expect it to become even more powerful and more useful over time.

However, when it comes to A-level Physics, there are some very important limits that students, parents, and teachers need to understand. Over the past year, I’ve encountered several situations — both directly and indirectly — that highlight why AI must be used carefully and critically, particularly when mathematics and physical reasoning are involved.

What follows are three real examples.


1. When the Method Is Right — but the Maths Is Wrong

A student sent me a difficult physics problem involving three particles arranged in an equilateral triangle, with the forces between them modelled as springs.

[Image: the difficult springs question]

Solving the problem required substituting one force expression into another and making a first-order approximation: specifically, treating the stretched length of each spring as approximately equal to its natural length.
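
To illustrate the kind of approximation involved (a generic sketch, not the actual working from this question): if a spring of natural length L₀ is stretched by a small amount x, then

  L = L₀ + x ≈ L₀   (valid when x ≪ L₀),

and any factor of 1/L appearing in the force equations can be expanded to first order:

  1/(L₀ + x) ≈ (1/L₀)(1 − x/L₀).

Dropping the terms in (x/L₀)² and higher is exactly the kind of step that is easy to miss when a question only states it in passing.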

That approximation was stated in the question, but it was easy to overlook. I initially missed it. The student couldn’t solve it either. I passed the question to a colleague, who worked on it with an engineer — and they also struggled.

Eventually, they asked an AI tool for help. It produced an answer almost instantly. At first glance, it looked convincing.

But it was wrong.

When asked how it had solved the problem, the AI gave a clear explanation of the correct method, including the approximation that all of us had initially missed. That explanation allowed me to spot my mistake, apply the approximation properly, and redo the calculation myself.

That’s when I discovered something important: although the method was sound, the numerical working was incorrect.

This highlights a crucial point. AI language models can explain how to approach a physics problem, but they do not reliably perform mathematical calculations. Physics is applied mathematics. If the numbers are wrong, the physics is wrong — even if the explanation sounds plausible.

Click here to see the difficult question and worked solution


2. AI Marking: When Confidence Masks Hallucination

I’ve also experimented with AI as a potential support tool for marking student work. The challenge here isn’t just reading handwriting — that is improving — but understanding connections: linked equations, fractions, reasoning across multiple lines of working.

In one attempt, I provided:

  • the student’s script,

  • the exam question,

  • the mark scheme,

  • and a model solution.

The AI responded confidently, appearing to apply the mark scheme and offer advisory feedback — exactly what I had asked for. But on closer inspection, it wasn’t doing this reliably.

In one case, involving a particle physics question, the AI made confident statements about calculations and physical behaviour that were simply wrong.

I tried again using a different strategy: first asking the AI to analyse the question alone. This time, the analysis was correct — but it turned out the student had answered that question correctly anyway. Students learn far more from feedback on what they get wrong.

So I asked the AI to analyse another question from the same script, one the student hadn't done well. At that point, the system didn't just make small errors. It hallucinated an entirely fictitious question, complete with a 'student's' answer and a 'mark scheme', on a topic that was not even in the A-level specification!

The lesson here is clear: you cannot rely on a language model to mark A-level Physics work or to generate trustworthy feedback simply because it sounds confident — even if you provide it with the mark scheme.


3. AI-Generated Practice Questions That Look Right — but Aren’t

The third example didn’t involve me using AI at all, but one of my students.

She had found an AI website that allowed her to upload an A-level Physics question and then generate similar questions for practice. She brought one of these questions to a one-to-one session and asked for help because she could not solve it.

At first glance, it looked like a perfectly legitimate A-level question. It involved a metal rod carrying a current in a magnetic field, moving along rails, with a constant force causing acceleration.

But as we worked through it, something didn’t add up.

According to Fleming’s left-hand rule, the force on the rod would act in the opposite direction to the one required for the motion described in the question. That meant the rod could not behave as stated. The question was fundamentally flawed.
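
A quick way to check such questions yourself (using an illustrative configuration, not the exact one in the generated question): the magnitude of the force on the rod is F = BIl when the field is perpendicular to the current, and its direction comes from Fleming's left-hand rule, or equivalently the vector form F = Il × B. With the field directed into the page and conventional current flowing from left to right along the rod, the force on the rod points up the page. If a question then requires the rod to accelerate down the page under that force, the setup is physically impossible.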

The structure looked familiar. The wording looked right. But the physics was wrong.

When I asked where the question had come from, she explained it had been generated by AI. She had assumed — understandably — that because a computer produced it, it must be correct.

This is a dangerous assumption!

Language models generate text by predicting what is likely to come next, so the output looks like a physics question. The model does not actually understand the physics itself. As a result, AI can create questions, and mark schemes, that are internally inconsistent or physically impossible.


A Measured Conclusion

None of this is an argument against AI.

AI is already a powerful tool, and it will get better. I'm aware that specialist systems are being developed with curated physics content, and I expect those to become far more reliable in time.

But right now, general language models should not be trusted to:

  • perform physics calculations,

  • mark A-level Physics work,

  • generate practice questions,

  • or give definitive answers to exam-style problems.

Used carefully, AI can help with explanations, organisation, and idea generation. Used uncritically, it can mislead — especially in a subject where correctness matters.

The key skill for students is not avoiding AI, but learning how to question it, verify it, and never outsource their physical reasoning to it.

That judgment still belongs to humans.

Dr Alison Camacho is the founder and owner of 42tutoring Limited.

She is a very experienced teacher (more than 24 years) of A-level Physics and GCSE Science.
She is a member of the Institute of Physics and the Association for Science Education.

