- cross-posted to:
- programming@beehaw.org
cross-posted from: https://programming.dev/post/8121843
~n (@nblr@chaos.social) writes:
This is fine…
“We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.”
[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
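For anyone wondering what "insecure" looks like concretely: here is a minimal, hypothetical sketch of the kind of SQL-injection pattern such studies flag, written in Python with the standard sqlite3 module. This example is illustrative only, not taken from the paper.

```python
# Illustrative sketch (not from the paper): string-splicing user input
# into SQL vs. letting the driver bind it as a parameter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_insecure(name: str):
    # Vulnerable: the input becomes part of the SQL text, so
    # name = "' OR 1=1 --" matches every row.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_secure(name: str):
    # Safe: the driver binds the value, so the input stays data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_insecure("' OR 1=1 --"))  # leaks all rows
print(find_user_secure("' OR 1=1 --"))    # []
```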
This. As an experienced developer I've released enough bugs to mistrust my own work, so I spend as much time as the budget allows on my own personal QA process. It's no extra burden to do the same with AI code. And of course, a well-structured company has further QA beyond that.
If anything, I find it easier to do that with code I didn't write myself. Just yesterday I merged a commit with a ridiculous mistake that I should have seen. A colleague spotted it instantly once I got stuck and frustrated enough to ask for a second opinion. I probably would have caught it myself if an AI had written it.
In hindsight, an AI code audit would have picked it up too.
The quote above describes exactly what you just did: "yet were also more likely to rate their insecure answers as secure compared to those in our control group". That effect, at work :-)
I find that the people who complain the most about AI code aren't professional programmers. Everyone at my company, and all my friends in the industry, are very positive about it.