
The mind behind the screen: How the rise of AI is changing engineering innovation

By Rosalyn Page
9 May 2024
in Innovation, Features
7 min read

"It does the same thing every day, it just does it better than the way it did it previously.” Image credit: Getty

Tools derived from AI and machine learning are all around us, but what engineering principles lie behind the creation of this technology?

Behind the technology driving artificial intelligence (AI) lies human ingenuity and effort. These innovations have taken artificial intelligence from the realm of science fiction to applications we use every day.

According to Dr Dhani Dharmaprani, Future Making Fellow with the Australian Institute for Machine Learning (AIML) at the University of Adelaide, AI has the potential to help tackle some of humanity’s most pressing challenges, such as climate change, improvements to health care and health equity, and education. 

Dr Dhani Dharmaprani

“Artificial intelligence and the algorithms that drive it represent the forefront of technological innovation, with the potential to revolutionise how we live, work, and solve complex problems,” Dharmaprani said.

Understanding the engineering principles powering these intelligent systems is important to help engineers guide the beneficial and ethical development of AI.

AI is a branch of computer science focused on developing machines that will receive and analyse inputs and provide outputs. It is designed to make decisions and exhibit intelligent behaviour by learning patterns from data, rather than being explicitly programmed, according to Adam Amos, Director at Robotic Systems. 

Amos specialises in the design and manufacture of AI-powered industrial hardware.


AI can complete simple and complex tasks and is involved in most industries and processes, from manufacturing and logistics to marketing, finance, health care and transportation. 

“It spans everything from virtual tasks like recommendations on social media platforms to the physical world and doing things with hardware that can then be automated and optimised by applying AI,” Amos said.

To carry out its tasks, an AI system relies on an algorithm: the programming that tells the machine how to analyse input data, perform certain tasks and make decisions about the output.

Adam Amos

Algorithms can help predict patterns, calculate accuracy, analyse trends and optimise processes.

“It doesn’t learn on its own or change its behaviour day to day,” Amos said. “It’s the opposite: it does the same thing every day, it just does it better than the way it did it previously.”

AI algorithms are at work in a vast array of places: the YouTube and Netflix recommendations that keep us binge-watching TV, the endless scroll on Instagram or TikTok, recommendations for what to buy on retail sites, financial trading, medical diagnosis, self-driving cars, fraud detection and so much more.

Intelligent systems

Algorithms follow a set of instructions to carry out a process, whether it’s solving a problem, generating an output or achieving a specific goal. 

“At Robotic Systems we have a library of 408 different algorithms that we have available when approaching a new project and we choose one that most closely fits our purpose on a project,” Amos said.

Mathematics and statistics are fundamental to this process, along with a programming language such as Python, which is commonly used in AI development.


AI is different to traditional software development because it shifts from telling a computer what to do to teaching it what to do. 

“It’s heavily geared towards data science, which is what you need to bring AI to life,” said Amos.

To teach the algorithm, it’s necessary to first have data – and if that data doesn’t already exist, it needs to be collected. It is then a step-by-step process to develop, test, hone and apply the algorithm. Although applications vary, there are certain steps that typically define the way algorithms work.

Establish input data

An algorithm starts with an input, which may be a simple or complex data set, such as numbers, text or images. The first step is to collect the data and then label the data to identify points of interest and things the algorithm should learn. Because AI learns from data rather than being directly programmed, the data is used to teach the algorithm.

Define the process

Identify the steps that the algorithm is going to take to achieve the objective, such as: find X, then find Y and then find Z. Once these are found, calculate X + Y / Z. The sequence of steps is defined so the algorithm can carry out its work on the input data.
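
As a minimal sketch in Python (the variable names and numbers here are invented for illustration, not drawn from any system described in this article), such a hand-defined process is simply an explicit sequence of steps expressed as a function:

# A hand-defined algorithm: find three quantities in the input data,
# then combine them in a fixed calculation (illustrative values only).
def run_algorithm(data: dict) -> float:
    x = data["x"]        # step 1: find X
    y = data["y"]        # step 2: find Y
    z = data["z"]        # step 3: find Z
    return x + y / z     # step 4: calculate X + Y / Z

print(run_algorithm({"x": 2.0, "y": 6.0, "z": 3.0}))  # prints 4.0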

Execute the process

The algorithm will work through the steps, which involves decision-making, choosing different paths and responding to certain conditions.

Refine and optimise

Depending on the type of algorithm, it may be refined and improved by the human operator until it is finalised and set to work. If it is a self-learning algorithm, it will learn each time it’s executed.

Generate an output

After processing the input through the series of steps, the algorithm produces an output. It may be a solution to a problem, a recommendation, or a piece of text, code or imagery: the product of its pre-set process.

Repeat the process

The process is usually executed over and over again, sometimes with changes in each iteration, or if the algorithm has an endpoint, it will work until it has completed the task.
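
Put together, these steps resemble a standard machine-learning workflow. The sketch below uses scikit-learn, a widely used Python library; the synthetic data set and model choice are assumptions made purely for illustration, not a description of any project mentioned in this article.

# Illustrative end-to-end workflow using scikit-learn and synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# 1. Establish input data: a synthetic, labelled data set stands in for collected data.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# 2. Define the process: hold some data back so the result can be checked.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 3. Execute the process: train the model on the labelled examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# 4. Refine and optimise: measure performance and adjust until it is acceptable.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 5. Generate an output: apply the trained model to new inputs.
predictions = model.predict(X_test[:5])

# 6. Repeat the process: in practice, the model is retrained as new data arrives.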

Different approaches

Not all AI algorithms are the same and they can be grouped into several different categories, depending on the way they’re trained and how they carry out their programmed processes. 

The different algorithms have very different applications and results, from improving diagnosis and accuracy in industrial settings to encouraging human engagement. 

However, with reinforcement algorithms – the kind that can be used on social media platforms – repeating the process and responding to human engagement doesn’t inherently produce the best results. 

“We’ve actually had some fairly disastrous results for humanity, because those algorithms prioritise showing you what you watch, not what you like or is good for you,” Amos said. “So you may see a crash video or something like that, and you can’t help but watch it, and then it just shows you another and another.”

Supervised learning algorithms

These use labelled data for training and to predict outcomes. The algorithm learns the mapping from input data to output labels from examples a human has labelled during the training process. For example, to find all the red balls in a ball pit, humans would label all of the red balls in multiple images of a ball pit, thereby mapping the input to the output. These algorithms can be used in credit scoring, disease diagnosis, sales forecasting and cybersecurity.
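
A hedged sketch of that red-ball example: the colour values and labels below are invented, but they show how a supervised model learns a mapping from human-labelled inputs to outputs.

# Supervised learning sketch: classify balls as red or not red from
# human-labelled (R, G, B) colour values scaled to 0-1 (all values invented).
from sklearn.linear_model import LogisticRegression

colours = [(0.98, 0.08, 0.12), (0.94, 0.14, 0.10), (0.12, 0.78, 0.16),
           (0.08, 0.12, 0.86), (0.96, 0.04, 0.06), (0.78, 0.78, 0.16)]
is_red  = [1, 1, 0, 0, 1, 0]   # labels supplied by a human: 1 = red, 0 = not red

model = LogisticRegression().fit(colours, is_red)

# The trained model predicts labels for balls no human has labelled.
print(model.predict([(1.0, 0.02, 0.04), (0.04, 0.94, 0.12)]))  # likely [1 0]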

Unsupervised learning algorithms

These identify patterns in data using techniques such as clustering and dimensionality reduction, with the model trained on unlabelled data. The goal of unsupervised learning is to find hidden structure or patterns within the data. This is the world of ChatGPT and other large language model generative AI tools.
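
A minimal clustering sketch (the points below are invented): the algorithm receives no labels and finds the groups on its own.

# Unsupervised learning sketch: k-means groups unlabelled points
# without being told what the groups mean (points are invented).
from sklearn.cluster import KMeans

points = [[1.0, 1.2], [0.8, 1.1], [1.1, 0.9],   # one natural cluster
          [8.0, 8.2], [7.9, 8.1], [8.2, 7.8]]   # another natural cluster

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels)  # e.g. [0 0 0 1 1 1]: cluster ids found by the algorithm, not assigned by a human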

Reinforcement learning algorithms

These are more commonly used in applications like YouTube and social media recommendations, where the algorithms continually learn based on user behaviours. These include value-based, policy-based and model-based algorithms. They can be used in robotics, autonomous vehicles, logistics, digital advertising and manufacturing.
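
As a hedged illustration of the value-based variety, the tabular Q-learning sketch below learns by trial and error in a tiny invented "corridor" environment; the states, actions and rewards are assumptions made for the example only.

# Reinforcement learning sketch: tabular Q-learning in a tiny corridor.
import random

n_states = 5                               # states 0..4; reaching state 4 ends an episode
actions = [-1, +1]                         # move left or right along the corridor
Q = [[0.0, 0.0] for _ in range(n_states)]  # value estimate for each (state, action) pair
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount factor, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        a = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Update the value estimate from the observed reward: this is the learning step.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print([q.index(max(q)) for q in Q[:-1]])  # learned policy: should prefer action 1 (move right)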

Powering ChatGPT

Created by OpenAI, an American AI research organisation founded in 2015 and headed by Sam Altman, ChatGPT is a type of generative AI, which means it creates an output in the form of text, image, audio, video or code.

It’s a large language model (LLM), a type of artificial intelligence designed to work with human language at scale, which uses a mathematical model with a vast number of parameters, trained on text data to learn the patterns, structures and nuances of language.

GPT stands for generative pre-trained transformer, which refers to its ability to generate output from prompts. It’s trained for specific tasks using a transformer model to understand the context and relationship between words in a sequence or sentence, even if they’re far apart.

The algorithm takes sequential text data and, using a transformer mechanism called “attention”, learns to predict the next word in a sequence based on the words it has seen so far.
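
The attention step itself comes down to a small piece of linear algebra. The NumPy sketch below shows scaled dot-product attention over a toy sequence; the random vectors stand in for learnt representations, which this sketch does not attempt to model.

# Scaled dot-product attention sketch (random vectors stand in for learnt ones).
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                  # four words seen so far, 8-dimensional vectors
Q = rng.normal(size=(seq_len, d))  # "queries" for each word
K = rng.normal(size=(seq_len, d))  # "keys" for each word
V = rng.normal(size=(seq_len, d))  # "values" for each word

scores = Q @ K.T / np.sqrt(d)                                          # how strongly each word attends to every other word
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax: attention weights sum to 1
context = weights @ V                                                   # context-aware representation of each word

# The representation of the latest word is what the model uses to score
# candidate next words in the sequence.
print(context[-1].shape)  # (8,)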


Through extensive training on diverse text data, it selects words in its output based on the probabilities learnt during training. The number in ChatGPT 3.5 or 4 refers to the version of the algorithm.
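
That selection step can be sketched in a few lines; the vocabulary and scores below are invented purely for illustration and bear no relation to any real model.

# Next-word selection sketch: sample from a learnt probability distribution.
import numpy as np

vocab = ["engineers", "bridges", "bananas", "design"]       # invented candidate words
logits = np.array([2.1, 0.3, -1.5, 1.8])                    # invented model scores

probs = np.exp(logits) / np.exp(logits).sum()               # softmax turns scores into probabilities
next_word = np.random.choice(vocab, p=probs)                # sample in proportion to those probabilities
print(next_word)                                            # most often "engineers" or "design"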

The chat function has captured the imagination, but the powerful aspect of these AI models is the number of parameters they can use, according to Dr Ben Swift, Senior Lecturer at the School of Cybernetics at the Australian National University. 

Dr Ben Swift

With ChatGPT and other generative AI tools, the parameters don’t have to be programmed by hand: they are learnt, and they determine how the model behaves.

Essentially, the model’s assumptions about how the world works are baked into those parameters, and it works in our own currency: natural, conversational language.

“That’s one of the other defining features of this type of AI: you don’t need to use input parameters in some specific data format,” Swift said. “It’s using human text and it’s conversational.”

Essentially, these models learn through the information fed into them, so there are fewer restrictions on the volume of inputs. 

“They’re fed billions and billions of inputs and learn statistical patterns and features of the data that’s fed into them,” Swift said.

A scene from Panic, an interactive artwork at ANU’s School of Cybernetics that uses AI.

An ethical future 

The release of ChatGPT has sparked considerable interest in AI and brought into focus concerns about bias and how to ensure the technology is developed ethically. In response, numerous countries are developing ethical guidelines for AI. 

In Australia, the federal government has initiated an AI Expert Group for Ethical Guidance to ensure safety and accountability. 

In the US, an AI Directive establishes guidelines for trustworthy AI development, while the European Union has approved legislation setting out clear requirements and obligations.

Ethical development requires stringent standards that emphasise fairness, transparency, accountability and privacy, according to Dharmaprani. 

“Equally important is capturing diversity within AI development to reduce biases and ensure the technologies are equitable and beneficial for all segments of society,” she said.

The key is to align AI development with societal values. 

“This includes a strong emphasis on transparency, rigorous testing, and exploring the implementation of AI guardrails in high-risk settings to safeguard AI systems,” she said. 

One of the other issues, according to Swift, is that the algorithms themselves are not accountable for the decisions they make. 

“Having appropriate accountability structures built into the system is a key part of working with and designing these systems,” he said. 

Responsible design requires thinking through who benefits, who might be harmed and, especially, who will be accountable for the decisions made, because that clarifies how the inputs and outputs of any AI model should be used.

“These systems don’t exist in isolation,” Swift said. “Good engineering means thinking through the broader system and context where the technology may be applied.” 

Tags: machine learning, cybernetics