The AI Controversy

In an effort to get more people to read the rest of this, let me say I don’t write with AI. Occasionally it helps me when I’m stuck, but so far nothing it’s ever produced has been good enough to actually use. At best it gives me a new direction of thinking. I’m fairly certain that’s not going to change. That’s not because I have deep-seated principles against using AI (that has paid copyright dues for its training sets). Rather, it’s because I see inherent limitations in the current approach to AI technology: the current strategy seems unlikely to manage more than incremental improvements from here on in. I say that from an in-depth understanding of its workings and a lot of experiments. I’m happy to discuss the specifics of that opinion, but it’s not the subject of this page.

I’m a fan of technology, so naturally I’m interested in new software like generative AI in its many forms. I’m always a little piqued by generalist statements like “I hate AI” or “I don’t follow anyone who uses AI”, both of which are examples I’ve seen come by on Bluesky. The idea that ‘it’s impossible to ethically use AI’ is an unacceptable generalisation. The way AI is discussed in the creative community evokes images of people frothing at the mouth, consumed by fanatical rage, convinced that AI is a great evil. That’s not an effective way to deal with it.

The creative community is especially concerned about AI, and rightly so. The truth is that the companies developing AI have been using a hell of a lot of pirated material to fill their training sets. By our current laws they ought to be paying for that; that much is obvious. If they’d ever stopped to think about what they’re doing, they would have realised that the way they employ training sets requires rights to use the material, in almost exactly the same way that a library must purchase the books it lends.

The basic and historical premise of intellectual property law – namely that creatives can have some reasonable expectation of compensation for their work when it is enjoyed by others – is sound. However, this is where my opinion does seem to deviate a bit from what I’m seeing around me: a library has to buy whatever it lends to people, and that system is a model for the rules that ought to apply to a training set.

Now, should the output of an AI automatically be considered a copyright breach? I’d say of course not, and the reason is simple. All work in history is inspired by what came before it, and a lot of that inspiration exists because of the hard work of others. Whether the resulting work is a breach of copyright depends on how closely it resembles the pre-existing work, and these rules should apply to AI-generated material the same way they apply to anything else. When you publish something, it’s not reasonable to expect complete control over what it’s used for. There’s no historical, reasonable or practical case where that’s ever been true. If you publish work, you relinquish a great amount of control (all of it?), and recognition of that fact is the reason why intellectual property law exists.

It needs to be emphasised that using technology to assist in the making of creative works is not new. AI is a technological tool that differs from a mathematical filter in an art program, a spell checker or countless other technological tools only in complexity. Before you say ‘but the training set’: every tool has to be tested, and tuning a tool on real-world examples is not different from a training set in any meaningful way. When the result is too close to a previous work, that may be plagiarism, but whether or not it was AI-generated isn’t actually part of the equation. It may happen far more often with AI-generated work, in large part because AI doesn’t cite which parts of the training set it used (it should), but that doesn’t change the nature of the transgression.

Whether we like it or not, AI is almost certainly a transformative technology, and not being able to employ it effectively is likely to become a limitation in the same way computer illiteracy is. Being against it is likely to go the same way as being against the wheel as a technology to get places, or fire as a technology to get warm. The quality of the AIs you have access to may even become a social divide, though that’s a different, more general subject that I won’t get into here.

I recognise that creatives may feel that their livelihood is threatened. It is, as it is for all professions that are unable to adapt to changing circumstances. There aren’t many blacksmiths or coopers left in business either. That’s not a pretty part of progress, but I would venture that the alternative – a static world and society – is far worse. However, it’s also an opportunity. Creatives are best placed to master effective employment of this technology. It does have the potential to be a powerful assistant, and my own experiments lead me to believe that getting something useful out of it isn’t easy. Using AI effectively requires so much time and effort that I personally find it hard to see much difference between that investment and the time and effort needed to master any other creative skill. The world is going to need a hell of a lot of people who know how to operate AI effectively.

What I mean to say with all this is simply that the way the creative community vilifies AI isn’t an effective strategy. Hard legal action is required to make sure training sets are only filled with correctly licensed data, but considering AI ‘wrong’ or being against it isn’t going to get us anywhere. I would also say that an AI drawing from a training set with copyrighted material should report not only which material was accessed, but also how much the result resembles the raw training material, so that the operator has some way of knowing when the result is likely to be considered plagiarism. The companies developing AI need to be forced to respect intellectual property law, but the technology itself needs to be evaluated on its own merits, not by the bad behaviour of its developers. I’d even suggest that learning to use it effectively can make you a better creative. If you do, though, you’re still responsible for making sure the end result isn’t plagiarised. Just like you are now.

Judging by the number of people who state that they’ll insta-block anyone who uses AI, this article will probably generate ire among many in the creative community. Those who want to talk about that are welcome to contact me. If you think it’s a good reason to insta-block, that’s fine too.