AI Notice - Use of AI

This notice describes the extent to which I use Generative Models.

Please note: I do not support the use of Generative Models to CREATE content. I firmly believe that models built to generate finished content are soulless abominations. However, I do believe that models used to enhance an artist's workflow (as noted below) are a helpful set of tools that eliminate the tedious problems with photoshoots that are hard to notice and hard to control.

The tools I list below are, in my personal opinion, not designed to replace or supplant humans, and are A-OK. If you believe all AI tools are a bane on artistry in general, I respect that. However, I am a quite busy person with a full-time job. I want to put my content out into the world, and if I completely cut myself off from these tools, the amount of content I could produce would significantly decline. Knowing my own psyche, I would eventually abandon creating content altogether as the ever-increasing amount of content out there outpaced what I could contribute.

This notice is intended as a general description of how artificial intelligence is used in my workflows. If an individual image uses generative models beyond what is noted below, that image will carry a notice describing the extra models that were used.

Photography

Generative Models used in my photography workflow.

Auto-Levels

I use Adobe's "Adaptive Color" profile when I render pictures coming off the camera. It gets the lighting levels roughly correct. After that, I usually still need to adjust the white balance (if the area has multiple temperatures of light) or the light levels (if the area is not lit evenly throughout). As far as I can tell, this doesn't make any generative changes to my content; it simply determines the best levels to start with.

The main reason for using this profile is that a lot of my photography is taken outside, where I cannot control the light levels, and individually adjusting the light levels for every picture is a time-consuming process.

Heal Tool

In Adobe Camera Raw, I have access to the Heal Tool, which allows me to remove certain artifacts like dust, reflections, and lens flares. These are pesky to remove by hand, so I use the tool when an artifact is too much of a distraction. I do not use it to create things that do not exist, and I only use it when the artifact sits on the subject and can't be cropped out.

For items off to the side, such as a person who isn't part of the photoshoot, I opt to crop them out of the picture rather than running a Heal Tool removal.

This tool is the closest I get to using Generative Models in my art. If a client asks that I not use the generative Heal Tool, I will use the old-fashioned Heal and Clone tools, which require manually selecting the pixels to move around. It takes significantly longer to edit pictures with the old Heal and Clone tools, but I am happy to do it if a client has apprehensions about any Generative Model touching my photography.

Denoise

I like shooting at night, and raising the camera's ISO brightens a shot without requiring a stable platform and a slower shutter speed to compensate. Any shot above ISO 1600 carries a degree of noise, so I use Adobe's "Denoise" utility to remove noise from low-light, high-ISO pictures.

As far as I can tell, this doesn't create new image content; it simply smooths out nearby pixels. It's more of a calculation model than a generative model (like an LLM).
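Adobe doesn't publish Denoise's internals, so don't take this as a description of their tool. It's just a minimal sketch of what calculation-based smoothing looks like: every output pixel is computed purely from pixels that already exist in the frame, with nothing invented.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise(image: np.ndarray, radius: int = 1) -> np.ndarray:
    """Replace each pixel with the median of its local neighborhood.

    The output is a pure function of pixels already in the frame;
    nothing new is generated.
    """
    size = 2 * radius + 1
    # Smooth over height and width, but keep color channels independent.
    return median_filter(image, size=(size, size, 1))
```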

⚠️ A Note on Internet Images

The internet is always growing, and a recent study suggested that roughly half of all new online articles are generated by LLMs. I do use images from the internet as background-replacement images, or to add different things into the scenes I capture. While I do not intend to use Generative Models (as I do not generate content), it is becoming increasingly difficult to find pictures that were not made with one. Using stock photos (such as Adobe Stock) is the best way I have found to avoid accidental use of Generative Models, since those sites mandate labeling of all Generative AI tool use, but it's not perfect.

If you discover that any of my art uses images or content generated by AI, please contact me via Instagram, Threads, Facebook, or Bluesky, and I will immediately take down the image and replace it with content that was not made with a Generative Model.

Programming

Generative Models used in my programming/coding workflow.

ChatGPT

I use ChatGPT primarily as an advanced search engine. There are a lot of really weird edge cases that I come up against, and, while I know the majority of what I'm programming, I use ChatGPT to find out how to do specific things.

Every line of code that ChatGPT generates is reviewed prior to implementation. While I do copy-paste some code, I more often opt to rewrite it, as ChatGPT tends to write unmaintainable code.

Advanced Review

LLMs grew out of models built to look at content and evaluate it rather than generate it. Image-classification models (like those from DeepMind) were designed to label images, and I do use that capability to generate sensible tags for content that is hard to describe specifically.

For example, in a weather application, I used ChatGPT to classify whether an image showed "sunny", "cloudy", "rain", "extreme weather", etc. I pulled a bunch of images from a stock site, but I needed a programmatic way of going through each image and classifying it. Using the GPT-4o model, I was able to upload each individual image and get the classifications I needed, rather than looking at every one of the 100+ images I had downloaded and classifying them by hand.
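This isn't my exact script, just a minimal sketch of the approach using the OpenAI Python SDK. The `stock_photos` folder, the label list, and the prompt wording here are stand-ins for whatever your project needs.

```python
import base64
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["sunny", "cloudy", "rain", "extreme weather"]

def classify_weather(image_path: Path) -> str:
    """Ask GPT-4o to pick exactly one weather label for a photo."""
    encoded = base64.b64encode(image_path.read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Classify this photo as exactly one of: "
                    + ", ".join(LABELS) + ". Reply with the label only."
                )},
                {"type": "image_url", "image_url": {
                    "url": f"data:image/jpeg;base64,{encoded}"
                }},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

# Tag every downloaded stock photo instead of eyeballing all 100+ by hand.
for path in sorted(Path("stock_photos").glob("*.jpg")):
    print(f"{path.name}: {classify_weather(path)}")
```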

Commenting

Again, LLMs are really good at taking in text and figuring out the gist pretty easily. I use LLMs to comment my code quickly. I'm not good at turning things into English (because English is a silly language), but LLMs have this uncanny ability to read my code and quickly write something that makes sense.
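As a hypothetical before-and-after (this is an illustration, not code from any of my projects): I'd paste in something terse, and the model hands back the same logic in readable English.

```python
# What I might paste in: terse, but the logic is all there.
def bps(ms, n):
    return (n * 8) / (ms / 1000)

# What the model hands back: the same logic, now documented.
def bits_per_second(duration_ms: float, num_bytes: int) -> float:
    """Convert a byte count measured over a time window into bits per second.

    Args:
        duration_ms: Length of the measurement window, in milliseconds.
        num_bytes: Total bytes transferred during that window.
    """
    return (num_bytes * 8) / (duration_ms / 1000)
```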

Unit Tests

Again, LLMs are very good at taking in code and determining what needs to be tested to make sure you didn't screw things up. I feed code into coding models to write unit tests, and while they produce tests I *probably* would have written myself given enough time banging my head against a wall, they also tend to produce unit tests I hadn't thought of.
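A hypothetical example of what that looks like in practice (the `clamp` helper and the tests here are made up for illustration, not from a real project of mine):

```python
def clamp(value: float, low: float, high: float) -> float:
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(high, value))

# Tests I'd have written myself eventually:
def test_clamp_passes_through_in_range_values():
    assert clamp(5, 0, 10) == 5

def test_clamp_pins_to_bounds():
    assert clamp(-3, 0, 10) == 0
    assert clamp(42, 0, 10) == 10

# The kind of edge case a model surfaces that I hadn't thought of:
def test_clamp_degenerate_range():
    # When low == high, there is only one valid output.
    assert clamp(7, 5, 5) == 5
```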

🔐 Security-Critical Code

Under NO circumstances will I ever use any generative model to work on security-critical code. I mean... there are plenty of articles explaining why you should never give the wheel to an LLM when you are working on security-critical code.