It’s now well established that LLMs are pretty nifty at writing code, but it turns out they can debug code as well, and without any external help at that.
LLMs can be taught to debug the code they’ve written, a paper by Google and UC Berkeley researchers has found. Titled “Teaching Large Language Models to Self-Debug,” the paper proposes a technique it calls “self-debugging”.
“Large language models (LLMs) have achieved impressive performance on code generation. However, for complex programming tasks, generating the correct solution in one go becomes challenging, thus some prior works have designed program repair approaches to improve code generation performance. In this work, we propose SELF-DEBUGGING, which teaches a large language model to debug its predicted program via few-shot demonstrations,” the paper says.
“In particular, we demonstrate that SELF-DEBUGGING can teach the large language model to perform rubber duck debugging; i.e., without any feedback on the code correctness or error messages, the model is able to identify its mistakes by explaining the generated code in natural language,” the paper continues.
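To make the idea concrete, here’s a rough sketch of what such a rubber-duck loop could look like in Python. This is our illustration, not the paper’s actual few-shot prompts: the `call_llm` helper is a stand-in for whatever completion API you’re using, and the prompt wording is invented.

```python
# A minimal sketch of a "rubber duck" self-debugging loop: the model
# explains its own code and revises it based only on that explanation.
# No execution feedback or error messages are involved.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM completion call (e.g. an API client)."""
    raise NotImplementedError

def rubber_duck_debug(problem: str, max_rounds: int = 3) -> str:
    code = call_llm(f"Write a Python function for this task:\n{problem}")
    for _ in range(max_rounds):
        # Ask the model to explain its own code line by line.
        explanation = call_llm(
            f"Task:\n{problem}\n\nCode:\n{code}\n\n"
            "Explain this code line by line and say whether it solves the task."
        )
        verdict = call_llm(
            f"Explanation:\n{explanation}\n\n"
            "Based on this explanation, is the code correct? Answer YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            break
        # The model revises its program guided only by its own explanation.
        code = call_llm(
            f"Task:\n{problem}\n\nCode:\n{code}\n\n"
            f"Explanation:\n{explanation}\n\n"
            "The code appears incorrect. Produce a corrected version."
        )
    return code
```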
The paper says it managed to produce good results with the technique. “Self-Debugging achieves the state-of-the-art performance on several code generation benchmarks, including the Spider dataset for text-to-SQL generation, TransCoder for C++-to-Python translation, and MBPP for text-to-Python generation. On the Spider benchmark where there are no unit tests to verify the correctness of predictions, Self-Debugging with code explanation consistently improves the baseline by 2-3%, and improves the prediction accuracy on problems of the hardest level by 9%. On TransCoder and MBPP where unit tests are available, Self-Debugging improves the baseline accuracy by up to 12%. Meanwhile, by leveraging feedback messages and reusing failed predictions, Self-Debugging notably improves sample efficiency, and can match or outperform baseline models that generate more than 10x candidate programs,” the paper says.
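When unit tests are available, as on TransCoder and MBPP, the feedback loop can use actual execution results instead of the model’s own explanation. Here is another rough sketch under the same assumptions as above, reusing the `call_llm` stand-in; the test format and the `solve` entry-point name are invented for illustration.

```python
# A sketch of the unit-test-feedback variant: run the generated program,
# and if a test fails, feed the error message back to the model along
# with its failed prediction so it can produce a revised program.

def self_debug_with_tests(problem: str, tests: list[tuple], max_rounds: int = 3) -> str:
    code = call_llm(f"Write a Python function `solve` for this task:\n{problem}")
    for _ in range(max_rounds):
        try:
            namespace: dict = {}
            exec(code, namespace)  # run the generated program
            for args, expected in tests:
                assert namespace["solve"](*args) == expected
            return code  # all unit tests passed
        except Exception as err:
            feedback = repr(err)  # the error message becomes the feedback signal
        # Reuse the failed prediction: the model sees its own code plus the
        # execution feedback and produces a revised version.
        code = call_llm(
            f"Task:\n{problem}\n\nCode:\n{code}\n\n"
            f"Running the code produced this error:\n{feedback}\n\n"
            "Fix the code."
        )
    return code
```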
There are already concerns that many coding jobs could be lost to LLMs that are faster and cheaper than most software developers. And it appears that, given the right instructions, LLMs can automatically improve their code and fix their own mistakes. That isn’t just a worry for coders, but for the general population too: how many other tasks can LLMs get better at by themselves? And how good will they eventually be able to get?