From listening to doctor-patient conversations and writing up the medical notes, to creating multi-level lesson plans for a high-school class, to offering to make text messages “funnier” – generative artificial intelligence (genAI) is everywhere and evolving at speed. The modern professional needs to understand the scope and potential pitfalls of genAI to take full advantage of a useful tool that can significantly ease administrative burdens and boost productivity and efficiency, experts say.
AI Her Way consultancy founder Dr Nici Sweaney says “agentic AI” is now on the horizon – autonomous systems that can make and implement their own decisions based on a given set of conditions or circumstances.
“Agentic AI can make decisions about workload and it can delegate its own level of work and decide the best path forward to achieve a certain outcome,” she says. “Microsoft 365 has talked about having these agentic AI systems that are super user-friendly. You don’t need any kind of skill-set to set them up. You can just describe the type of equivalent staff you’d like to create, and it will create it for you, and then deploy it, and get all your tech talking to itself and each other.”
With straightforward genAI, a marketing professional might prompt ChatGPT or Google’s Gemini to write five Instagram posts and an email for a particular marketing campaign and provide the genAI model with examples, context, and explicit descriptions of how the work should be done, Sweaney says.
By contrast, agentic AI would make an autonomous decision about how best to deliver that marketing campaign, and then instruct subordinate AI agents to do the work. “The AI agent can review it, give feedback, learn over time, improve over time, and then, eventually, just send you the delivered product,” Sweaney adds.
Yet these autonomous systems could make decisions that don’t align with important human ethics and morals. “We might solve world hunger and climate change and food security, and that would be wonderful,” Sweaney says, “but we are probably also going to build weapons that are far scarier than anything we’ve ever seen in the past.”
Michelle Dennis, Head of Digital at Haileybury school in Victoria, says it has been difficult for many schools to navigate the deluge of AI programs launched in the past year, with new offerings emerging on an almost-daily basis.
“When ChatGPT first went viral it was a single website that you could manage and block,” she says. “But now AI is at the forefront of Word, there’s AI built into your browser, and AI built into Snapchat that the kids are using every day. Teachers and students are now navigating textbooks with AI built-in.”
The school uses the Microsoft Copilot genAI model and takes care to manage the safety of students’ and staff members’ personal data. GenAI can be an extremely useful tool for teachers, Dennis adds, “lifting the load” of various types of administrative work from teachers’ shoulders.
A genAI model can build a round robin, making sure that each team plays all its competitors at least once. It can create a roster, including all the variables that affect teachers’ availability. It can help write explanatory memos and notices.
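The round-robin fixture Dennis describes is a classic scheduling problem. As a rough illustration of what such a tool is generating behind the scenes, here is a minimal sketch using the standard “circle method” (fix one team, rotate the rest each round); the team names are hypothetical placeholders, not anything from the school.

```python
def round_robin(teams):
    """Return a list of rounds; each round is a list of (home, away) pairs,
    so that every team meets every other team exactly once."""
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)  # add a "bye" slot for an odd number of teams
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        # pair the first half of the list against the reversed second half
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)
                 if teams[i] is not None and teams[n - 1 - i] is not None]
        rounds.append(pairs)
        # rotate every team except the fixed first one
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]
    return rounds

fixture = round_robin(["Red", "Blue", "Green", "Gold"])
# 4 teams -> 3 rounds; each pair of teams meets exactly once
```

For n teams this yields n−1 rounds (n rounds if n is odd, with one bye per round) – the same guarantee Dennis describes, just computed deterministically rather than generated by a model.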
“It can help teachers manage complex workloads, and differentiate lessons for students of many, many different ability levels,” Dennis says. “AI can help us look at the things that are really challenging for teachers to do and most importantly, it can give teachers more time to build relationships with students, to apply their critical thinking and do that expert filtering and curating for their students.”
Haileybury caters for children and adolescents from pre-school up to year 12, and students are introduced to AI from the age of 13 and taught to use the tool “thoughtfully and with purpose, without disengaging from critical thinking,” Dennis says.
Mike McKenna, founder of Australian AI auditing practice Adjust AI, says professionals should always be aware that genAI can “hallucinate” and provide fictional data as an answer to a prompt.
In the US, lawyers have been sanctioned for using genAI to write legal briefs, because the tool can include citations to fictional cases – as reportedly happened to lawyers representing MyPillow CEO Mike Lindell, a fervid Trump supporter.
“It’s really important to verify the information that you get, particularly in high stakes environments like legal and health,” McKenna says, adding that genAI itself has now been deeply integrated into more common forms of fact-checking.
“If you ask a question of Google, often the first thing that pops up will be a Gemini AI view of the answer, which itself may or may not be correct,” he says, noting that even specialist online verification sources now have AI integrated into them.
“As soon as you step into the genAI world, you run into that risk, and it’s just a risk to manage, like any other – you mitigate it, take care and verify,” he adds, noting that there is a time cost to this extra fact-checking. “There’s a trade-off to be made on how quickly you want to have throughput, let’s say, and how careful you want to be in the accuracy of that throughput.”