Software engineers who use code-generating AI systems are more likely to introduce security vulnerabilities into the apps they develop, TechCrunch reported, citing a study by Stanford University researchers.
“Code-generating systems are currently not a replacement for human developers,” TechCrunch quoted a scientist involved in the study as saying.
“Developers using them to complete tasks outside of their own areas of expertise should be concerned, and those using them to speed up tasks that they are already skilled at should carefully double-check the outputs and the context that they are used in, in the overall project,” the scientist added.
The Stanford study looked at Codex, an AI code-generating system developed by San Francisco-based research lab OpenAI. The researchers recruited 47 developers to use Codex to complete security-related problems across programming languages like Python, JavaScript, and C.
The system was trained on billions of lines of public code to suggest additional lines of code and functions, given the context of the existing code.
The study found that participants who had access to Codex were more likely to write incorrect and insecure solutions to programming problems compared with a control group.
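The study does not reproduce participants' code, but one common example of the kind of insecure pattern an AI suggestion can introduce is an SQL query assembled by string concatenation. The short Python sketch below is an illustration only, not code from the study; it contrasts that pattern with a parameterized alternative using the standard sqlite3 module.

    import sqlite3

    def find_user_insecure(conn, username):
        # Insecure: splicing user input into the SQL text allows injection
        # if `username` contains something like "' OR '1'='1".
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_secure(conn, username):
        # Safer: a parameterized query lets the database driver handle escaping.
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()

The safer version passes the user-supplied value as a bound parameter rather than pasting it into the query string, which is the kind of detail the researchers advise developers to double-check in AI-suggested code.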
However, code-generating systems remain helpful for tasks that are not high-risk, such as exploratory research code.
Published - December 29, 2022 03:15 pm IST