It blurs the line between human and machine, and it has upended an entire industry to both acclaim and outcry. Yes, we’re talking about Auto-Tune.
It’s a familiar story for engineers, with one exception: it’s a story that comes to life with a bass-heavy beat and the voice of Cher.
This is the story of Auto-Tune, software that singers can use in the recording studio to go back and fix off-key notes in a performance.
It is the subject of some controversy, it has transformed the pop music landscape, and it exists because an electrical engineer found a new use for the mathematical algorithms he had been using in the oil industry.
“An extreme setting”
Artists and producers had been using Dr Andy Hildebrand’s invention for about a year before Cher released her single “Believe” on 19 October 1998, exactly 20 years ago today.
At first, singers used Auto-Tune to imperceptibly correct bad notes, the way Hildebrand had intended. But “Believe” exploited an unusual setting that Hildebrand included on a whim: one that snapped the singer’s voice to the target pitch the instant the program detected an error, rather than letting the correction ease in at a more natural pace.
The effect was unearthly: it made a human voice into something robotic and strange.
“It’s an extreme setting,” Hildebrand told the Seattle Times in 2009.
“We didn’t think anybody would do that, but apparently it’s a popular thing nowadays.”
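To get a feel for what that setting changes, consider a minimal Python sketch (an illustration of the idea, not Antares code). Suppose a hypothetical retune parameter controls what fraction of the remaining pitch error is removed on each analysis frame: a small value eases the voice toward the target note, while pushing it to the extreme snaps every frame straight onto the target, producing the robotic effect.

```python
# Toy model of pitch-correction speed (not Antares's algorithm).

def correct_pitch(detected_hz, target_hz, alpha):
    """Correct a per-frame pitch track toward a target note.

    alpha is the fraction of the remaining error removed each frame:
    small alpha -> gentle, near-inaudible correction;
    alpha = 1.0 -> instant snap, the robotic "Believe" effect.
    """
    corrected = []
    correction = 0.0
    for pitch in detected_hz:
        error = target_hz - pitch                    # how far off-key this frame is
        correction += alpha * (error - correction)   # smoothed correction amount
        corrected.append(pitch + correction)
    return corrected


# A voice drifting sharp of A4 (440 Hz), one pitch estimate per frame:
voice = [440.0 + 3.0 * i for i in range(10)]
gentle = correct_pitch(voice, 440.0, alpha=0.2)  # glides back toward 440 Hz
snap = correct_pitch(voice, 440.0, alpha=1.0)    # every frame lands exactly on 440 Hz
```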
Before his turn as a pop music game changer, Hildebrand studied electrical engineering, earning a PhD from the University of Illinois in the US in 1976. From there, he worked for oil giant Exxon before starting his own geophysical consulting company, which was ultimately bought by Halliburton.
His work at Exxon involved reflection seismology, a technique for mapping subsurface structures by analysing the way sound waves reflect off layers of rock.
“That computation allows oil companies to use seismic data to map subsurface strata to find oil,” Hildebrand said.
He expanded on the process further during a PBS question-and-answer session in 2009.
“Seismic data processing involves the manipulation of acoustic data in relation to a linear time varying, unknown system (the Earth model) for the purpose of determining and clarifying the influences involved to enhance geologic interpretation,” he explained.
“I was working in an area of geophysics where you emit sounds on the surface of the Earth (or in the ocean), listen to reverberations that come up, and, from that information, try to figure out what the shape of the subsurface is,” he told Priceonomics.
“It’s kind of like listening to a lightning bolt and trying to figure out what the shape of the clouds are. It’s a complex problem.”
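A standard tool in that kind of processing is correlation: slide the known source pulse along the recorded signal and find the lag where the two match best, which gives the echo’s travel time and, given a wave velocity, the reflector’s depth. The toy NumPy example below is a sketch of that principle only, not Hildebrand’s actual processing chain.

```python
import numpy as np

# Toy reflection-seismology example: emit a known pulse, record the
# reverberation, and cross-correlate to recover the echo delay.
rng = np.random.default_rng(0)

fs = 1000                                                    # samples per second
pulse = np.sin(2 * np.pi * 50 * np.arange(0, 0.05, 1 / fs))  # 50 Hz source wavelet

trace = np.zeros(fs)                                         # one second of recording
delay = 0.3                                                  # echo arrives 0.3 s later
start = int(delay * fs)
trace[start:start + pulse.size] += 0.5 * pulse               # weak, delayed reflection
trace += 0.02 * rng.standard_normal(trace.size)              # ambient noise

# The lag that maximises the correlation is the two-way travel time.
corr = np.correlate(trace, pulse, mode="valid")
print(f"estimated travel time: {np.argmax(corr) / fs:.3f} s")  # ~ 0.300
```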
He proved his worth when he saved Exxon half a billion dollars by fixing the faulty seismic monitoring instrumentation that had been delaying the company’s Alaskan pipeline.
Engineering sweet, sweet music
As it turns out, the same mathematical principles that could locate oil deposits could be re-imagined to strike pop gold. Hildebrand emphasised to electronic music publication Thump that he had simply applied one specialty to two very different fields.
“You would say that I’m a practitioner of digital signal processing and I’ve applied that to geophysics, and I’ve applied it to music,” he said.
But Hildebrand does see a connection between engineering and music.
“If you go to a university symphony orchestra and ask the engineers and mathematicians to raise their hands, half the orchestra will raise their hands,” he told the Seattle Times.
“It’s the ability for the mind to do symbolic abstraction.”
An accomplished flautist himself (he paid for his education with music scholarships), he combined his areas of expertise when he founded audio technology company Antares. And it was there that he devised — as the patent refers to it — a “pitch detection and intonation correction apparatus and method”. The technique uses a type of mathematical analysis called autocorrelation to detect pitch in real time.
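The patent covers far more machinery than fits here, but the core autocorrelation idea is compact: a periodic waveform lines up with a delayed copy of itself once the delay equals one full period, so the lag of the strongest self-match reveals the pitch. A minimal sketch of that idea (an illustration, not the Antares implementation):

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=50.0, fmax=1000.0):
    """Autocorrelation pitch estimate: the lag at which the signal
    best matches a shifted copy of itself is one period."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]  # lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)     # search plausible period range
    period = lo + np.argmax(ac[lo:hi])
    return fs / period

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
frame = np.sin(2 * np.pi * 261.6 * t)           # a pure tone near middle C
print(f"detected pitch: {estimate_pitch(frame, fs):.1f} Hz")  # ~ 261 Hz
```

In practice the detection step would run continuously on short, overlapping frames of the voice signal; this sketch shows only a single frame, and only the detection half of the detect-then-correct process.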
But as Cher demonstrated, Auto-Tune’s real legacy has been the result of artists deliberately misusing it, creating strange sounds that could never exist naturally.
Rappers like T-Pain, Kanye West, Lil Wayne, Young Thug and Future have used Auto-Tune to bend their voices into odd angles; for a genre like hip-hop, it offered a new way of playing with language by breaking words down into their individual phonemes.
Famous or infamous?
But some listeners and artists resisted. Many music fans decry the ubiquity of the technology, and the indie rock group Death Cab for Cutie made a tongue-in-cheek protest at the 2009 Grammy Awards, wearing blue ribbons they said were a tribute to the ‘blue note’ Auto-Tune could not produce. (In a subsequent update of the software, Antares introduced a setting that could produce such a sound.) Meanwhile, rapper Jay-Z recorded a single announcing the “Death of Auto-Tune”, though Hildebrand’s software has survived nine years beyond that eulogy.
For his part, Hildebrand recognises that the sound isn’t to everyone’s taste, but also that its longevity speaks to an enduring appeal.
“I just build the car, I don’t drive it down the wrong side of the road,” he told Thump.
Auto-Tune is, if anything, even more popular outside the West. It has become widespread in Arabic and African music, and it’s also a prevalent feature of Caribbean sounds.
create requested an interview with Hildebrand, but according to an Antares representative, he “sold the company a few years ago and retired to a tropical island”. At the time of publication, Hildebrand had not responded from his getaway.
Depending on the location of Hildebrand’s island home, his invention might have followed him. Even the creator can’t escape his creation.