This makes sense to me, and is in line with recent interpretations of copyright for AI-generated artwork. Basically, if a human directly creates something, it’s protected by copyright. But if someone makes a thing that itself creates something, that secondary output is not protected by copyright. AI-generated artwork is an extreme example of this, but if that’s the framework, applying it to data newly generated by any code seems reasonable.
This wouldn’t/shouldn’t apply to something like compression, where you start with a work directly created by someone, apply an algorithm to transform it into a compressed state, and then apply another algorithm to transform the data back into the original work. That original work was still created by someone, so it should be protected by copyright. But data generated fresh, like the game state in memory during the execution of the game’s code, was never directly created by anyone, and so isn’t protected.
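To make that distinction concrete, here’s a minimal Python sketch (using zlib purely as an illustration; the specific text is a stand-in for any human-authored work): the compressed bytes are just a reversible encoding of the original work, whereas runtime game state has no human-authored original behind it.

```python
import zlib

# A human-authored work (stand-in for any copyrighted text).
original = b"Call me Ishmael. Some years ago..."

# Transform it into a compressed state...
compressed = zlib.compress(original)

# ...and transform it back. The result is byte-for-byte identical to the
# original, so the compressed blob is merely an encoding of something a
# person created, not a new work generated by the algorithm.
restored = zlib.decompress(compressed)
assert restored == original
```

The round trip recovers exactly what the author wrote, which is why the compressed form would still carry the original’s copyright, while in-memory game state never corresponds to anything a person directly authored.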
I don’t see how this wouldn’t be a derivative work. I highly doubt a robust, commercial software product built on AI-generated code would leave that code unmodified. I use AI to generate boilerplate code for my side projects, and it’s exceedingly rare that its output is 100% correct. Since that generated code is not copyrightable, it’s public domain, and by modifying it I’m creating a derivative work from it, so that derivative work is mine.
As AI gets better at generating code and we can use its output without modification, this may become an issue. Or maybe not. Maybe once the AI is that good, you no longer have software companies, since you can just generate the code you need, and software development as a business becomes obsolete, like the old human profession of “computer.”