Large language models spark AI discussion - tool, or cheat?

The massive changes to daily life brought about by the Internet may pale in comparison to the impact of artificial intelligence.

Combined with the continued evolution of robotics, artificial intelligence could allow any number of jobs now done by humans to be performed better by machines, and likely will.


Those who have had a chance to test ChatGPT or Google Bard, for example, may have seen a glimpse of the potential.

But alongside those developments, work is also underway to set boundaries and enforce rules.

Zach Bennett is a Distinguished Machine Learning Scientist with Turnitin, a company that has spent years helping educators worldwide spot plagiarism.

Turnitin recently developed the capacity to spot machine-generated text, he says, and has added that capability to the “similarity report” educators have used for years.

“In that same tool, there will now be an indicator that says ‘our detector found some text that looks like AI writing,’” Bennett told KRMG, “and they’ll be able to click through and look at which portions of the text we’ve identified.”

The idea is not to eliminate the use of large language models, nor to accuse anyone of cheating, but to draw attention to passages that may be questionable and to spark discussion of where the line falls between using these models as tools and simply cheating.

“I think when it comes to, you know, what a student’s turning in, it really has to be their authentic voice,” he added. “It has to be what they intended to say, right? It’s up to them to go through all of the text and make sure they really wanted to say it, they want to say it this way, they understand all the material, rather than passing off something that was automatically generated as something that they produced.”