Last season – simply to spark debate – Paul Needham (our resident IT expert) generated two images using Artificial Intelligence (AI) and entered them in club events.

On Wednesday 25 October he unravelled some of the mysteries of AI in photography for his fellow members.

AI is a complicated subject and Paul set out to explain to us what AI is as it relates to photography and what it means for camera clubs and their members.  So, rather than debating the ethics of AI, he described some of the relevant AI technology and explained in simple terms how it works.

He also covered how it has been used in photography for many years, how it has developed recently, and how, with suitable instructions in the form of “prompts”, it can now generate new original images.

The computer science of AI encompasses many branches such as robotics, image recognition, and (most relevantly for photography) machine learning.  And each branch has several elements.  Machine learning encompasses supervised learning (which is task driven), unsupervised learning (which is data driven), and reinforcement learning (which is learning from mistakes).
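
To illustrate the distinction, here is a minimal sketch in Python using the scikit-learn library (a choice of tooling for illustration only; it was not part of Paul’s talk).  It trains a supervised model on labelled flower measurements and, separately, lets an unsupervised model group the same measurements without any labels.

```python
# Supervised vs unsupervised learning, sketched with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

data = load_iris()  # flower measurements plus the correct species labels

# Supervised learning (task driven): the model learns from labelled examples.
classifier = DecisionTreeClassifier().fit(data.data, data.target)
print(classifier.predict(data.data[:3]))   # predicted species for three flowers

# Unsupervised learning (data driven): the model sees only the measurements,
# with no labels, and groups similar flowers together on its own.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(data.data)
print(clusters[:3])                        # cluster assigned to the same three flowers
```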

Machine learning has been around since 1959.  Broadly, it gives computers the ability to learn without being explicitly programmed.  It uses neural networks (rather like neural pathways in the brain) to understand relationships between data.  And a type of machine learning model called a “generative model” can learn the underlying patterns or distributions of data in order to generate new, similar data.  Essentially, it enables a computer to produce its own data based on what it has seen before.
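
The “generative” idea can be sketched in a few lines of Python.  The example below uses scikit-learn’s GaussianMixture, a far simpler generative model than those behind the image generators Paul discussed, purely to show the principle: learn the distribution of some data, then draw brand-new samples that resemble it.

```python
# A toy generative model: fit a distribution to data, then sample new data from it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(seed=0)
seen_data = rng.normal(loc=[0.0, 5.0], scale=[1.0, 0.5], size=(500, 2))  # "what it has seen before"

model = GaussianMixture(n_components=1).fit(seen_data)  # learn the underlying distribution
new_data, _ = model.sample(100)                         # generate new, similar data
print(new_data.mean(axis=0))                            # close to the original means (roughly 0 and 5)
```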

As well as art creation, this is useful in discovering new drugs, in content creation, and in video games – among many other things.

Three AI apps are particularly relevant for photographic purposes: Stable Diffusion, Midjourney, and DALL-E.  All three use models which generate images from natural language descriptions called “prompts”.  Interestingly, the same prompt will generate a new and different image each time it is used.
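
As an illustration of how such a prompt is turned into a picture, here is a minimal sketch using the open-source Stable Diffusion model through Hugging Face’s diffusers library (an assumed toolchain for illustration; the talk did not specify how any particular image was made).  It also shows why the same prompt produces a different image each time: each run starts from fresh random noise unless the seed is fixed.

```python
# Generating images from a text prompt with Stable Diffusion (via diffusers).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "a red deer stag in morning mist, in the style of a wildlife photograph"

# Two runs of the same prompt start from different random noise, so the images differ.
image_one = pipe(prompt).images[0]
image_two = pipe(prompt).images[0]

# Fixing the random seed makes the result repeatable.
torch.manual_seed(42)
image_fixed = pipe(prompt).images[0]
image_fixed.save("stag.png")
```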

As a demonstration of this technology Paul included a quiz – is the picture real or fake?  That is, was it taken with a camera or was it generated by an AI app?  While there were some clues in the images, it was mostly impossible to decide either way.  It turned out, however, that all these images were computer generated!  And sometimes they were even in the style of known photographers.

Paul also included a video demonstration (using DALL-E) to show us how all this works in practice.

AI is already used in Photoshop, Lightroom, Facebook, Google Images, etc.  It can be applied to image recognition, image improvement, lighting and colouring, and cropping and filling (eg, content aware fill), among other editing actions.  It can also reduce the burden of complicated or tedious processes (such as removing noise or making difficult selections).
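
Adobe’s own tools are proprietary, but the general idea behind two of these chores, noise reduction and filling in an unwanted area, can be sketched with the open-source OpenCV library (an illustration only, not Adobe’s actual method; the file names are hypothetical).

```python
# Noise reduction and region filling ("content aware fill" style), sketched with OpenCV.
import cv2

image = cv2.imread("original.jpg")

# Noise reduction: smooth away grain while trying to keep detail.
denoised = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)

# Object removal: a white mask marks the unwanted element, and the surrounding
# pixels are used to paint over it.
mask = cv2.imread("distraction_mask.png", cv2.IMREAD_GRAYSCALE)
filled = cv2.inpaint(denoised, mask, 3, cv2.INPAINT_TELEA)

cv2.imwrite("edited.jpg", filled)
```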

Adobe have now produced their own AI app, Adobe Firefly, which is free as part of their Creative Cloud set of apps.  Stable Diffusion and Midjourney are available as standalone apps.  DALL-E was created by OpenAI (which is partnered with Microsoft) and is included in Microsoft Bing, where it is free to use.

Paul wound up with some more video demonstrations showing how Photoshop and some of these AI apps can be used to:

  • Remove distractions and unwanted elements,
  • Change backgrounds,
  • Replace backgrounds (wholly or in part),
  • Change colours,
  • Change hairstyles and clothes,
  • Add sunglasses and other objects,
  • Remove tattoos,
  • Add clouds,
  • Repair old photos, and
  • Much, much more.

Paul’s fascinating and informative presentation was well illustrated with a wide selection of AI (ie, computer generated) images.  And he clearly demonstrated just how effective this developing technology can be in the creation of – or assisting the creation of – new images.  All this gave us much food for thought!

And, naturally, AI will go on getting better.  Reassuringly, though, Paul concluded that despite the continuing advances in AI technology “there will probably always be room for human endeavours and creativity to break through”.

The LBPC General Rules for entering our internal competitions already cover the use of AI.

Firstly, “All elements of the work submitted must be the work of the Author.  All assets used in an image must have been captured by optical means by the author, and the copyright of all elements of a picture must be owned by the Author.”

Nevertheless, “The use of software for editing images such as cloning, compositing, sky replacement, blurring, etc, are accepted forms of image post-processing, as these are methods that were originally used in the darkroom.  The introduction of AI/machine learning-assisted editing tools is an extension of these techniques.  Therefore, the use of these tools will be permitted to edit and post-process club images.”

However, “The use of computer software, services and applications to create a completely computer-generated image is not permitted.”

For the avoidance of doubt, the use of modern cameras and phones which might include some element of AI in their construction (eg, autofocus and face/eye detection), and the use of editing assets (such as brushes, textures, masks, etc), are permitted.