Forum Replies Created
AuthorReplies
February 22, 2025 at 2:21 pm #425
stableartai
Participant

As of 2025, there are now hosting sites that use AI to generate entire websites, including the artwork and animation.
March 17, 2024 at 8:42 am #402
stableartai
Participant

So I have been learning more about SD and some of its default tools and features.
Currently at the top of the list:
- The prompt and negative prompt are the most important.
- Running 16GB of VRAM instead of 8GB is a game changer for getting things done at higher resolution and sampling counts in a shorter time. We are fairly sure our system is slow, but let us know how it compares.
- Simple single-image generation time: 35s to 1m30s.
- Image dimensions are not limited to square, but some checkpoints and/or LoRAs might have limits.
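On the dimensions point: SD 1.x works in a latent space downsampled by a factor of 8, so non-square sizes are fine as long as both width and height are multiples of 8. A minimal sanity-check sketch (the helper name is our own, not part of any SD tool):

```python
def valid_sd_dims(width: int, height: int) -> bool:
    """SD 1.x latents are downsampled 8x, so both
    dimensions should be divisible by 8."""
    return width % 8 == 0 and height % 8 == 0

# 512x768 portrait is fine; 500x750 would be rejected by most UIs
print(valid_sd_dims(512, 768))  # True
print(valid_sd_dims(500, 750))  # False
```

Most front ends snap the sliders to multiples of 8 for exactly this reason.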
March 3, 2024 at 2:02 pm #417
stableartai
Participant

Finding the right balance in steps appears to be important too for producing fewer flaws; the default was 20 steps. Having experimented with different ranges (GPU VRAM is a big plus for generation time), we have found 50 to be good for many checkpoints.
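To get a feel for the cost of raising the step count: total sampling time scales roughly linearly with steps. A back-of-the-envelope sketch, assuming a constant (hypothetical) per-step cost chosen to match the ~35s figure reported above for 20 steps:

```python
def estimate_render_seconds(steps: int, seconds_per_step: float = 1.75) -> float:
    """Rough estimate: sampling time grows roughly linearly with step count.
    seconds_per_step is hardware-dependent; 1.75 is an illustrative value."""
    return steps * seconds_per_step

print(estimate_render_seconds(20))  # 35.0  (the default step count)
print(estimate_render_seconds(50))  # 87.5  (the count we settled on)
```

So going from 20 to 50 steps roughly 2.5x's the render time, which is why the VRAM upgrade matters when you push steps higher.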
February 26, 2024 at 2:18 pm #423
stableartai
Participant

The question by hichh01 from DA is:
Is it possible to use artificial intelligence images on our sites, and are they considered unique or not?

Answer:
The debate may still be out, but it is most likely in that gray zone, like fan art. The best source so far is the wiki: https://en.wikipedia.org/wiki/Stable_Diffusion (scroll to the Usage section at the bottom).
February 23, 2024 at 2:04 pm #419
stableartai
Participant

If you haven't made the move to SD 1.5 locally, just an FYI: SDXL 1.0 is now a part of it. So you can use the SDXL extension along with SDXL checkpoints and LoRAs! 😁
February 21, 2024 at 2:39 pm #432
stableartai
Participant

OK, one of the biggest tips I can provide, now over two weeks in as a neophyte adding new checkpoints and LoRAs to the app: "Preview"!
Checkpoint and LoRA details:
- Add notes and a preview image so you can recall why you grabbed that particular model and version, and also have quick access to the tags and settings that help you make fine adjustments.
February 11, 2024 at 7:49 am #434
stableartai
Participant

From mistdragon070 on DA (edited Feb 11, 2024):
I've been doing Stable Diffusion art lately; some interesting stuff I found:
- ControlNet is the best thing, not only for poses but also for smaller things like controlling the lines themselves to manage overlapping arms/hands/etc.
- A lot of prompting depends on your model, so looking at examples of the model on Civitai and other people's pics of it gives you a feel for the type of prompts the model works best with. For anime, a lot of them train on Danbooru, so any tags you find there work.
- If you're trying to carry the color from one image to another, the best thing is img2img; an advanced technique is to combine that with ControlNet to control the overall "lines" of the picture as well.
- If you're an artist and know 3D, don't be afraid to actually use 3D in your artwork. There are a lot of poses that AI can't come up with out of the box, but using a 3D model as a base through ControlNet can get you there.
- Weird random info: consistent colors matter a lot depending on the model + VAE. Certain models have an almost default coloring to them, and you can keep it by using similar prompt wording each time.
- A 2070 works fine for me if anyone wants to go super potato :V I even use a laptop lol.
- I work a lot more on consistent characters and certain background perspectives because this is all tied to visual novel development. Not sure if I can promote, but if you look at my profile I have a Reddit link on the background process as well as the overlapping limbs stuff with ControlNet. 😀 GL HF!
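The Danbooru-tags tip above can be illustrated with a tiny prompt builder: anime checkpoints trained on Danbooru generally respond better to comma-separated booru tags than to natural-language sentences. The helper name and the quality-tag defaults below are our own illustration, not part of any SD tool's API:

```python
def build_tag_prompt(tags, quality_tags=("masterpiece", "best quality")):
    """Join booru-style tags into the comma-separated prompt format
    that Danbooru-trained anime checkpoints typically expect."""
    return ", ".join(list(quality_tags) + list(tags))

prompt = build_tag_prompt(["1girl", "silver hair", "school uniform", "outdoors"])
print(prompt)
# masterpiece, best quality, 1girl, silver hair, school uniform, outdoors
```

Reusing the same wording and tag order each time also helps with the consistent-coloring point made above.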
February 10, 2024 at 7:56 am #436
stableartai
Participant

Performance observation:
With sampling set to 85 and resolution 720 x 1080, without the memory mod, a batch run takes about 1 min 40 sec to render each image. This is far better on 16GB of VRAM than it was on 8GB.
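Scaling that observation to a whole batch is simple arithmetic. A sketch using the reported ~1 min 40 sec (100 s) per image (the helper name is ours):

```python
def batch_eta_minutes(images: int, seconds_per_image: float = 100.0) -> float:
    """Estimate total batch time from a measured per-image render time.
    Default reflects the ~1m40s per image observed at 85 sampling steps,
    720x1080, on 16GB VRAM."""
    return images * seconds_per_image / 60.0

print(batch_eta_minutes(12))  # 20.0 -> a 12-image batch takes about 20 minutes
```

Handy for deciding how large a batch to queue before stepping away from the machine.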