Their reasoning is interesting, though I'm not sure I completely agree on all points:
“Arguments against giving AI authorship are that software simply can’t fulfill the required duties, as Skipper and Springer Nature explain. “When we think of authorship of scientific papers, of research papers, we don’t just think about writing them,” says Skipper. “There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities.”
Software cannot be meaningfully accountable for a publication, it cannot claim intellectual property rights for its work, and it cannot correspond with other scientists and the press to explain and answer questions on its work.”
In the case of ChatGPT, I would guess that a version fine-tuned on the paper actually could answer questions from other scientists and the press. And is claiming intellectual property rights even an absolute necessity when it comes to sharing scientific findings anyway?
“Meaningfully accountable” is certainly a squishy criterion as well. Seems like we’re in for a long, drawn-out battle over AI attribution and redefining authorship… Old conceptions around these things are simply going to collapse under the weight of new pressures from these emerging tools.