How I Used Stable Diffusion And Dreambooth To Create A Painted Portrait Of My Dog – When I first started playing with Stable Diffusion’s text-to-image tooling in August 2022, my immediate reaction was, “ZOMG! I need to make art for my art wall!” Only to face-plant immediately, because vanilla Stable Diffusion is quite difficult to tame. If you want to depict a specific subject, you need additional strategies and techniques that simply weren’t available at the time.
In recent months, several new community projects have emerged that aim to give AI artists real creative control over the visuals they are trying to bring to life. One such technique is LoRA (Low-Rank Adaptation), which I explored in my post on creating self-portraits with Stable Diffusion and blending painterly styles.
A more popular technique is Dreambooth, which is what the rest of this post focuses on. I’ll walk through my entire workflow for bringing Stable Diffusion generations to life as high-quality framed prints. We’ll cover generating artwork with Dreambooth and Stable Diffusion, outpainting, upscaling, preparing for print in Photoshop, and finally printing on fine art paper with an Epson XP-15000 printer.
Dreambooth is a fine-tuning technique for text-to-image diffusion models. Essentially, it means you can “tune” a readily available open source Stable Diffusion model so that it reliably and consistently produces images of a subject or style you define.
If you’re interested in this sort of thing, I recommend reading the Dreambooth paper, which can be found at https://arxiv.org/abs/2208.12242. While there are some technical parts, it includes lots of sample images to help you get a feel for what’s possible. I found the Dreambooth paper very inspiring, and it’s what led me to create this artwork and write this blog post. Probably because it contains so many example pictures of dogs 😅. I’ve included a few of the images from the paper below.
It’s like a photo booth, but once the subject is captured, it can be synthesized wherever your dreams take you. – https://dreambooth.github.io/
In this project and post, we’ll be training a Dreambooth model on images of my best friend 🧀 Queso.
Queso is a very bright and sweet English Cream Golden Retriever and the best boy ever, which makes him the perfect subject for Dreambooth custom model training!
The first thing you need to train a custom Dreambooth model is a set of “high-quality” training images. I put “high-quality” in quotes because I’ve seen pretty good results from less-than-ideal images in the past. Common practice is to select multiple images of your subject across different poses, environments, and lighting conditions. The more variety you have, the more general and versatile your resulting Dreambooth model will be.
The paper uses 3-5 images to train Dreambooth models, but the community often uses more. So I collected 40 pictures of Queso in different poses, lighting conditions, and environments.
Some of my photos were taken in very similar environments, and in my first experiments I found that elements of those backgrounds started showing up in the generated images, so I chose to cut the backgrounds out. This is entirely optional, and I wouldn’t recommend it unless you run into the same problem. I was able to do it very quickly in Photoshop.
I zipped up the training images and uploaded the zip file to S3, where I can reference it by URL. This is important because in the next step we’ll pass this zip file’s URL to the Dreambooth training job.
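If you’re curious what that step looks like, here’s a minimal sketch using boto3; the bucket and key names are made-up placeholders, not my actual setup:

```python
# Upload the zipped training images to S3 and print a URL that the
# Dreambooth training job can download the zip from. Names are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "my-training-data"           # hypothetical bucket
key = "dreambooth/queso-images.zip"   # hypothetical key

s3.upload_file("queso-images.zip", bucket, key)

# A presigned URL lets the training job fetch the zip without making
# the bucket public; this one expires after an hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": key},
    ExpiresIn=3600,
)
print(url)
```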
Below are the training pictures of Queso. Isn’t he the most beautiful boy you’ve ever seen?
For my Dreambooth training adventure, I chose to use Replicate (as I have in my last few posts). Replicate is ideal for projects like this because it removes the pain of working with cloud GPUs and manually installing and configuring everything. You just send an HTTP request, and you never have to think about provisioning GPUs or shutting them down when the job finishes. Replicate has a semi-documented Dreambooth training API, which is described in this blog post.
If you’re brave and want to dive deep yourself, I suggest trying the fast-stable-diffusion Google Colab notebooks: https://github.com/TheLastBen/fast-stable-diffusion. They include a notebook for training Dreambooth models and one for running the Automatic1111 Stable Diffusion web UI for quick experimentation.
Following the Replicate Dreambooth documentation blog post, I put together a quick bash script with hardcoded inputs.
The bash script is adapted from the Replicate blog. Below it, I’ve included a breakdown of the various parameters I used. If you’re interested in more advanced training parameters, detailed documentation for each one can be found here: https://replicate.com/replicate/dreambooth/api
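For reference, here’s a hedged sketch of that training request, translated from bash into Python. The endpoint and field names follow the Replicate blog post linked above, but the API is experimental, so double-check the docs; the zip URL and prompts here are placeholders:

```python
# Sketch of the Dreambooth training request. I actually ran this as a
# bash script with curl; this is a rough Python equivalent. The endpoint
# and fields follow Replicate's blog post, but the API is experimental,
# so check the docs for current parameter names.
import os

import requests

resp = requests.post(
    "https://dreambooth-api-experimental.replicate.com/v1/trainings",
    headers={"Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}"},
    json={
        "input": {
            # "queso" is the unique identifier token for my subject
            "instance_prompt": "a photo of queso dog",
            # the broad class the model should preserve while fine-tuning
            "class_prompt": "a photo of a dog",
            # placeholder URL for the zip of training images from earlier
            "instance_data": "https://my-bucket.s3.amazonaws.com/queso-images.zip",
            "max_train_steps": 4000,
        },
        # destination: where the trained model will live on Replicate
        "model": "jakedahn/queso-1-5",
    },
)
resp.raise_for_status()
print(resp.json())
```

The instance_prompt is where the unique identifier for your subject lives, and the class_prompt tells the model what broad category (dog) to preserve while it learns your specific subject.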
Training takes ~30-40 minutes to run from start to finish, so you might want to take a break and hang out with your furry friend. Unsurprisingly, more training steps mean more training time, and 4,000 steps is a lot.
The destination model name is set via a field in the JSON request body. Once training is complete, a private Replicate model is created with a URL like https://replicate.com/jakedahn/queso-1-5 (I’ve kept mine private, so that link returns a 404). After the model is created, you can generate images through the Replicate web UI or the Replicate API.
Boom! If you’ve followed along this far, you should have your very own Dreambooth model! Now for the fun part: generating a ridiculous number of pictures of your furry friend.
First we need to write a few prompts, and then we can generate hundreds or thousands of images 😱.
Seeing as I’m the world’s worst prompt engineer, I took the easy way out and spent an hour in Lexica’s infinite scroll. Lexica is a huge collection of AI-generated images, all shared along with their prompts. After a while, I had selected ten prompts from images I liked in my search results.
I then wrote a quick and dirty Python script that ran each of these prompts ten times, generating 100 images in total. I’ve done this many times… I never get tired of looking at AI dog art.
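The script looked something like the sketch below. The prompts shown are just stand-ins for the ones I pulled from Lexica, and the model version id is a placeholder (yours is shown on your Replicate model page):

```python
# Generate ten variations of each prompt with the fine-tuned model.
# The version id below is a placeholder; use the one from your Replicate
# model page. Requires REPLICATE_API_TOKEN in the environment.
import os
import urllib.request

import replicate

MODEL = "jakedahn/queso-1-5:0000000000000000000000000000000000000000000000000000000000000000"

# Stand-in prompts, in the spirit of the ones I found on Lexica
prompts = [
    "portrait of queso dog, oil painting, vibrant colors, intricate detail",
    "queso dog in a renaissance collar, detailed oil painting, dramatic lighting",
]

os.makedirs("images", exist_ok=True)

for i, prompt in enumerate(prompts):
    for run in range(10):  # ten images per prompt
        # Stable Diffusion models on Replicate return a list of image URLs
        output = replicate.run(MODEL, input={"prompt": prompt})
        for k, url in enumerate(output):
            urllib.request.urlretrieve(url, f"images/prompt{i}_run{run}_{k}.png")
```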
After running this script a few times, I had generated at least 1,000 images. I’d say ~20% were nonsense and 80% were cute, funny, or accurate. Here are some of my favorites:
Finally, after generating hundreds of fake Quesos, I settled on this one. I love the color palette; it’s vibrant and high-contrast. I love the texture and all the fine lines and details. It also captures Queso’s eyes really well, which is what sold me in the end. Every time I see it, I think, “Dang, that’s Queso!”
Now, the ultimate goal of this art project was to produce a high-quality painting that I could hang on my art wall. As cool as this image is, it doesn’t make a good fine art print: the awkward crops at the top and bottom limit its potential.
So the next step in this project was to fix the awkward crop. If only there were some way to conjure new pixels for the top and bottom of the image…
Outpainting is a technique for generating new pixels that seamlessly extend the boundaries of an existing image. That means we can generate new pixels above and below our image to get a full rendering of Queso. As I understand it, and I may be wrong 🤷‍♂️, outpainting for diffusion models was first popularized by OpenAI.
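To make the idea concrete, here’s a rough sketch of the mechanics using the diffusers inpainting pipeline: pad the canvas, mask the blank regions, and let the model fill them in. None of the tools I tried below necessarily work exactly this way; this is just an illustration:

```python
# Outpainting sketch: extend a 512x512 image up and down by padding the
# canvas and letting an inpainting model generate the masked regions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

original = Image.open("queso.png").convert("RGB")  # assume 512x512
pad = 128  # pixels of new content to add on top and on bottom

# Taller canvas with the original pasted in the middle
canvas = Image.new("RGB", (original.width, original.height + 2 * pad))
canvas.paste(original, (0, pad))

# Mask is white (255) where new pixels should be generated, black where
# the original image must be kept
mask = Image.new("L", canvas.size, 255)
mask.paste(Image.new("L", original.size, 0), (0, pad))

result = pipe(
    prompt="portrait of a golden retriever, oil painting, fine detail",
    image=canvas,
    mask_image=mask,
    width=canvas.width,
    height=canvas.height,
).images[0]
result.save("queso-outpainted.png")
```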
I hadn’t used DALL-E 2 much, so I wanted to give it a try and see how its outpainting performed. In my opinion, OpenAI’s outpainting UX and interface are the best I’ve tried, but I wasn’t a big fan of the pixels it produced. I recorded a short video clip of adding pixels to the top of my image, but the overall result was a bit too cartoonish for my taste; it also made Queso look like he was wearing a tiara.
Then I tried the Automatic1111 Stable Diffusion WebUI from the notebook mentioned earlier (https://github.com/TheLastBen/fast-stable-diffusion). The Automatic1111 UI is the most fully featured and extensible UI in the community, so I figured outpainting would Just Work™️. I was wrong 😐. It seemed to simply take the top and bottom rows of pixels and stretch them, extending the image from 512px to 1344px.
Finally, I tried the Draw Things Mac app. I really like Draw Things: it does much of what Automatic1111 does, but with a nicer UI, and it runs natively on M1/M2 MacBook Pros for free. Outpainting was about the only part of its UI I could get to work 😐, so that’s what I ended up using it for.
The image here looks a little different from the ones above, because I got excited and started playing with upscaling (which I’ll talk about more later) before outpainting the image. Don’t worry about it!
It worked great!