When it comes to balancing innovation and regulation for artificial intelligence, the input of engineers is invaluable.
The recent acceleration in artificial intelligence (AI) capability has captured the public imagination, but it has also heightened fears about the implications of this seemingly revolutionary technology.
As pressure grows for regulators to place safeguards on the uses of AI, Engineers Australia has heeded the Australian government’s call for submissions on how to support responsible uses of these systems.
The government has also sought advice on how generative AI — programs such as the large language model ChatGPT — might shape the future of education.
“AI has the potential for significant transformation in the future of engineering in ways we are just starting to understand,” said Damian Ogden, Engineers Australia’s Group Executive for Policy and Public Affairs.
“It will alter how engineering is taught and assessed. In the workplace, it can optimise design processes, improve modelling and help extract more meaningful insights from data. This will lead to greater productivity, freeing up engineers to be more innovative.”
Benefits and risks
Engineers Australia’s Information, Telecommunications and Electronics Engineering (ITEE) College contributed to the submissions.
ITEE Chair Peter Stepien told create that while AI has been around for a long time, it has evolved.
“It has reached a stage where it is a lot more versatile than it used to be,” he said. “We are using a technology that is just another tool in our toolbox that we can use to design and build. So in that respect, we have to manage the risks that are associated with it.”
Stepien said it was important to embrace the benefits that AI can deliver.
“But, of course, in everything that engineers do, we always complete a risk analysis — and this is a technical and safety risk analysis, to ensure that whatever we [are] designing is going to be safe,” he said.
“We must not build something that’s unsafe, and AI would still fit into that framework.
“However, I think the government wants to make a special case of AI, given that it’s so different to what we would normally consider as being a traditional engineered approach to a problem, which is more deterministic.
“While AI is, in principle, deterministic, it still lends itself to providing a wide variety of responses for a given input. That makes it a little bit different.”
That, Stepien said, explains why governments are approaching the technology cautiously.
“And I think rightly so,” he added. “They want to ensure that the risks they can mitigate by legislation [are addressed]. But at the same time, I don’t think they want to hinder creativity in this space.”
And this is playing out on an international scale too: governments want to ensure that they are not hindering their own AI development and allowing competing nations to get ahead of them.
That means, Ogden said, properly deploying regulation to reduce the risk of bias and misinformation. In that regard, he has found that Engineers Australia’s members’ views align well with what the government has been saying.
“We have been speaking to many members with strong experience in this field, and our perspective on AI aligns quite well with what we are hearing from government,” he said.
“We are advocating for a balanced approach — regulatory and non-regulatory measures — to harness AI’s benefits while safeguarding professionals, educators, students and the community.
“The approach must prioritise regulation for AI systems with high-risk implications, ensuring public protection while maximising the benefits of these systems and ensuring Australia can develop an internationally competitive AI industry.”
Critical systems
Stepien said an important focus for AI regulation was its use in critical systems, where failure would have significantly detrimental effects.
“We don’t want to have the government regulate and hinder the use of AI in places where it can be used safely,” he said. “But we want to make sure that in places where it can cause a hazard, the government does have some regulation.
“And this comes down to mainly critical systems. We need to have some verification that a particular design is working correctly.
“Some governments around the world, that’s what they’re doing. They’re saying, here is the risk associated with the use of AI, and depending upon where it sits, they either regulate that area or they don’t regulate.”
Ogden described input from engineers as “critical” to the AI debate.
“AI will not only impact the profession, but it is engineers who are developing these technologies and it will be engineers who integrate them into current and future systems,” he said.
“The more diverse perspectives we hear from, the greater chance we have at shaping policy which is fit-for-purpose and supports AI in the future.”