Cambridge Annual Report: Generative AI

#cambridge #fully #review

Cambridge released its annual report recently, and it included a section on Generative AI, which is obviously top of mind for almost everyone in the industry right now. It's refreshing to see organizations publish their principles on the topic, especially publishers and test administrators, as FUD around AI continues to grow in the language-learning space.

Here's what Cambridge is saying:

  • People are the priority. AI should support teachers and learners, not replace them.
  • Accuracy and trust matter. Any AI tools need strong human-created content behind them.
  • Authors have control. Their work can only be used if they agree, showing respect for intellectual property.
  • Piracy is off limits. Using stolen content for AI training is unethical and actively opposed.
  • AI can make teaching and assessment easier, but should never compromise fairness or integrity.

Of course, stated principles are different from execution, but simply planting a stake in the ground matters. It's almost certain that Cambridge is exploring ways to integrate AI deeply throughout the testing process in response to competitors like DET and PTE, but there's a big difference between fully automated review, human-in-the-loop review, and fully human review.

Fully human review is likely to disappear within a few years, with a big debate looming over human-in-the-loop versus fully automated scoring.