“It would be straightforward to recreate the logic of bombes in a conventional program,” Wooldridge said, noting the AI model ChatGPT was able to do so. “Then with the speed of modern computers, the laborious work of the bombes would be done in very short order.”
He’s speculating that an LLM could write a program to do so.
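He may well be right that the machine logic is easy to recreate. As a rough illustration (my own sketch, not anything from the article), here is a toy single-rotor "Enigma" using the historical Rotor I and Reflector B wirings. Real machines had three rotors plus a plugboard, but the core loop is the same: step, substitute forward, reflect, substitute back.

```python
# Toy single-rotor "Enigma" (historical Rotor I + Reflector B wirings).
A = ord("A")
ROTOR_I   = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"  # Rotor I wiring
REFLECT_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # Reflector B: a fixed-point-free involution

def enigma(text: str, start: int = 0) -> str:
    """Encrypt (or, identically, decrypt) an uppercase A-Z string."""
    inverse = {c: i for i, c in enumerate(ROTOR_I)}  # inverse rotor wiring
    out, pos = [], start
    for ch in text:
        pos = (pos + 1) % 26                    # the rotor steps before each keypress
        x = (ord(ch) - A + pos) % 26            # enter the rotor at its current offset
        x = (ord(ROTOR_I[x]) - A - pos) % 26    # forward through the wiring
        x = ord(REFLECT_B[x]) - A               # bounce off the reflector
        x = (inverse[chr((x + pos) % 26 + A)] - pos) % 26  # back through the wiring
        out.append(chr(x + A))
    return "".join(out)
```

Because the reflector is a fixed-point-free involution, the machine is self-reciprocal (`enigma(enigma(msg)) == msg` for the same start position) and no letter ever encrypts to itself, which is one of the structural weaknesses the bombes exploited.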
Using a slightly different approach – that Wooldridge suggested might be slower – researchers have previously used an AI system trained to recognise German using Grimm’s fairytales, together with 2,000 virtual servers, to crack a coded message in 13 minutes.
The link is to an abstract that tells you nothing more without an account on that website. But a better write-up of the research is here: https://www.digitalocean.com/blog/how-2000-droplets-broke-the-enigma-code-in-13-minutes
In late 2017, at the Imperial War Museum in London, developers applied modern artificial intelligence (AI) techniques to break the “unbreakable” Enigma machine …
So this is about research from 8 years ago! They go on to explain that they did a brute-force attack on the key, using an RNN (recurrent neural network) classifier to detect whether the decrypted text looked like German.
I’m no cryptographer, but I’m pretty sure we have been able to classify language samples quite successfully for a long time using much simpler (and faster) statistical techniques, such as n-gram frequency analysis.
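To illustrate the kind of statistical test I mean, here is a minimal sketch: brute-force a toy Caesar cipher and keep the candidate decryption containing the most high-frequency German bigrams. (The bigram list is a small, hand-picked illustrative set, and a Caesar cipher stands in for Enigma to keep the example short; the scoring idea is the same.)

```python
# Pick the "most German-looking" decryption by bigram frequency --
# a crude statistical stand-in for an RNN language classifier.
A = ord("A")

# A small illustrative set of frequent German bigrams (not exhaustive).
COMMON_BIGRAMS = {"EN", "ER", "CH", "DE", "EI", "IE", "IN", "ND", "TE",
                  "GE", "ST", "IS", "HE", "DI", "ES", "UN", "SC"}

def german_score(text: str) -> int:
    """Count adjacent letter pairs that are frequent German bigrams."""
    return sum(text[i:i + 2] in COMMON_BIGRAMS for i in range(len(text) - 1))

def caesar(text: str, shift: int) -> str:
    return "".join(chr((ord(c) - A + shift) % 26 + A) for c in text)

def crack_caesar(ciphertext: str) -> str:
    """Try all 26 keys; return the decryption that scores most German-like."""
    return max((caesar(ciphertext, -k) for k in range(26)), key=german_score)
```

No neural network, no GPU fleet: counting bigrams against a frequency table is enough to separate plausible German from the gibberish produced by wrong keys.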
The fact that the Guardian article mentions none of this, and presents the topic ambiguously enough to make it sound like ChatGPT can break ciphers on its own, makes me think this is more deliberate AI hype.
No, LLMs can’t decipher Enigma ciphertext.