Make sure you use an inpainting model. 🖼️ The tutorial demonstrates using ADetailer with a three-person group portrait, highlighting the importance of model selection for better hand rendering. Below are the original, the mask, and my best result. I decided to do a short tutorial about how I use it. Another example would be `Mask Blur` in inpaint clashing with `Mask Blur` from a built-in script. So your inpainting can only change the image a little bit at those soft edges; less chance for ugly seams this way. Jul 9, 2024 · Unit 1 setup, Unit 2 setup, generation result, appendix (input images, mask, generation parameters; parameters including seed are fixed for all examples). A collection of awesome custom nodes for ComfyUI. Depending on the image, this can be more or less obvious. - v0xie/sd-webui-incantations Aug 8, 2023 · When working with Inpaint in the "Only masked" mode and "Mask blur" greater than zero, ControlNet returns an enlarged image (by the amount of Mask blur), as a result of which the area under the mask no longer lines up. An Extension for Forge Webui that implements Attention Couple - Haoming02/sd-forge-couple It's a 128px model, so the output faces after face swapping are blurry and low-res. Mask Clothes extension for Automatic1111's Stable Diffusion Web UI. comfyui-MGnodes: assorted custom nodes with a focus on simplicity and usability, including a watermark node and others focused on customizing my Comfy experience. When using this 'upscaler', select a size multiplier of 1x, so there is no change in image size. This means that it works best in situations where the model has a very rough/generalized understanding of the prompt it's being fed, so much so that it "panics" (let's call it that) by adding extraneous details or fuzzing out the details.
If you set this to 0, your result will follow the shape/contours of the original image almost exactly, which is not desirable for most swaps. Reply reply myebubbles • Top Left - Original with mask, Top Right - sample at around 70%, Bottom - Final This has been driving me insane, I've played with mask blur/masked content, img2img color correction, inpainting conditioning mask strength. (SFW Friendly) Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111, SD. 3 and pyvirtualcam 0. You may also want to check our new updates on the tiny models for anime images and videos in Real-ESRGAN 😊 GFPGAN aims at developing a Practical Algorithm for Real-world Face Restoration. For the masked content, original means keeping whatever was originally there. Anything white is close to the camera, the darker you get, the further away from the camera the shape is. With simple text prompt controls, adjusting padding and batch size becomes effortless. Jul 21, 2023 · See #1597 (comment) for background. 440. 1K 136 Share Sort by: Best Apply unlimited masks to unlimited LoRA models. Apr 1, 2023 · Hello everyone! I am new to AI art and a part of my thesis is about generating custom images. Jun 19, 2024 · Stable Diffusion web UI. If it doesn't, I'd just do a quick photoshop mask, adjusting the blur/mask settings in the webui. Dec 18, 2023 · Navigating the sd webui is beginner-friendly, offering a seamless tutorial for users. May 30, 2023 · The short story is that ControlNet WebUI Extension has completed several improvements/features of Inpaint in 1. These areas are managed under the mask mode options: inpaint masked and inpaint not masked. Stable Diffusion web UI with Soft Inpainting. A workaround, disable non-needed extensions in the extensions tab that clash. works best but depending on the region and selection this may need tweaking. 
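The soft-edge behaviour described above can be reproduced outside the UI by feathering a binary mask yourself. A minimal sketch using Pillow, with the blur radius standing in for the Mask Blur slider (the function name is my own):

```python
from PIL import Image, ImageDraw, ImageFilter

def feathered_mask(size, box, blur_radius=8):
    # White = inpaint fully, black = keep original; the Gaussian blur
    # turns the hard rectangle edge into a soft ramp so the sampler can
    # only partially change pixels near the boundary.
    mask = Image.new("L", size, 0)
    ImageDraw.Draw(mask).rectangle(box, fill=255)
    return mask.filter(ImageFilter.GaussianBlur(blur_radius))

mask = feathered_mask((512, 512), (128, 128, 384, 384))
```

A mask prepared this way can be used with Inpaint upload, in which case the in-UI Mask blur can be left at 0, as noted later in this thread.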
Mar 11, 2023 · When I set mask blur to 4 pixels, does it mean that the blurred mask will spread 4 pixels at each direction (radius=4) or the entire ramp will be 4 pixels wide (diameter=4)? If I do inpaint upload, is there any difference between specifying mask blur in webui and baking it into the mask image itself? Make inpainting easier. Control Type: Select a control type. Most of the tutorials cover the built-in mask editor and not how to make a precise mask and upload it. Contribute to akatz-ai/ComfyUI-Depthflow-Nodes development by creating an account on GitHub. If you want to use GFPGAN to improve generated faces, you need to install it separately. The area outside the mask is the non-masked area. 5 Inpainting tutorial. Parameters: Soft inpaint on, mask blur 44, inpaint masked, original. I really like cyber realistic inpainting model. See full list on github. txt2img outpainting currently still produces a visible seam if you look closely enough. Download GFPGANv1. Tts creating transparent png masks for all clothes in uploaded image. Jan 6, 2023 · Deforum extension for AUTOMATIC1111's Stable Diffusion webui - Animation Settings · deforum/sd-webui-deforum Wiki Inside the Mosaic tab, upload an image of choice Choose the direction (s) to outpaint Adjust the parameters (explained below) as needed Press Process Mosaic Once the images have been processed, press Send to Inpaint In img2img tab, fill out the captions of the image eg. Once selected, the appropriate Preprocessor and Model are automatically loaded. 12 installed in my venv and I still can't find the reason for the Ebsynth to keep failing to generate stage 1 using transparent background and when I try just the clipseg, the stage 2 fails with the same "index out of range" as @DotPoker2 Should a downgrade of the Transparent background needed? My venv was built under cpython 3. --- Inpainting conditioning mask strength: 1. same as doing it in photoshop or whatever and then importing. 
ComfyUI-My-Mask: Some nodes for processing masks, currently including nodes that fill in the concave parts of existing masks with convex hulls. My photoshop skills are somewhat lacking. White mask on black background. py file in case something goes wrong open cimage. In addition, I have only tested sdxl models, please be aware that there is likely to be issues with other formats. - portu-sim/sd-webui-bmab How do I rollback to controlnet 1. - ltdrdata/ComfyUI-Impact-Pack Next generation face swapper and enhancer. A browser interface based on Gradio library for Stable Diffusion. Mar 12, 2023 · New Feature: "ZOOM ENHANCE" automatically fixes small details like faces and hands! Based on GroundingDino and SAM, use semantic strings to segment any element in an image. The tutorial shows more features. I used to run automatic1111's webUI on Ubuntu, following the instructions on the github page. There are already at least two great tutorials on how to use this extension. FaceSwapLab is an extension for Stable Diffusion that simplifies face-swapping. I've seen a lot of comments about people having trouble with inpainting and some saying that inpainting is useless. So, I've been using Stable diffusion webui (automatic1111) for about a week now, and up until few days ago the inpainting worked perfectly. Latent mode seems to generate without any console 1. To achieve these letters I first draw the shape using white (or very close to white) against a black background. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Mask refers to the area you paint and wish to correct or improve. All of this happens behind-the-scenes without adding any unnecessary steps to your workflow. Contribute to f0e/blur development by creating an account on GitHub. Check out this video (Chinese) from @ThisisGameAIResearch and this video (Chinese) from @OedoSoldier. 
OR cut out my background while leaving my subject in place and then generate a new background around them. Why is the area that needs to be redrawn gray? Additional information: does the parameter "inpainting_mask_invert" mean that 0 represents masked inpainting, and 1 represents unmasked inpainting? Inpainting with a small mask (just eyes, or a face at full body, etc.) gives no changes even with high denoise. The video also covers adjusting expansion directions and sizes, and using inpainting to seamlessly blend. The secret to REALLY easy videos in A1111 (easier than you think). A sophisticated ComfyUI custom node engineered for advanced image background removal and precise segmentation of objects, faces, clothing, and fashion elements. 222 added a new inpaint preprocessor: inpaint_only+lama. LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions (Apache-2.0 license), Roman Suvorov, Elizaveta Logacheva, Anton Ma. Wav2Lip UHQ extension for Automatic1111. Also, go to this huggingface link and download any other ControlNet models that you want. Aug 30, 2022 · Newly inpainted content inside the mask should not be blurred at all at the edges toward and beyond the edge line; instead, the real image should fade out to reveal the inpainted content underneath, using opacity/alpha fading over more and more pixels as you push the slider to bigger values. Jul 31, 2024 · What is ADetailer in Stable Diffusion? How to use After Detailer to fix garbled or distorted faces? Check this guide to learn the ADetailer workflow. Turn off Stable Diffusion WebUI (close the terminal), open the folder where you have it installed, go to \extensions\sd-webui-roop\scripts, make and save a copy of the cimage.py file in case something goes wrong, then open cimage.py in any text editor and delete lines 8, 7, 6, and 2.
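On the inpainting_mask_invert question: my reading is that 0 selects "Inpaint masked" and 1 selects "Inpaint not masked", which amounts to flipping the mask before it is used. A sketch of that assumed behaviour (function name is mine):

```python
import numpy as np

def effective_mask(mask: np.ndarray, inpainting_mask_invert: int) -> np.ndarray:
    # mask: uint8 array where 255 = painted area, 0 = untouched.
    # invert=0 -> regenerate the painted area ("inpaint masked")
    # invert=1 -> regenerate everything else ("inpaint not masked")
    return (255 - mask) if inpainting_mask_invert else mask

painted = np.zeros((4, 4), dtype=np.uint8)
painted[1:3, 1:3] = 255
keep = effective_mask(painted, 0)
flip = effective_mask(painted, 1)
```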
Oct 24, 2022 · I have had "Mask blur" drastically affect the resulting outpainted composition. Just a reminder that there is a new 'remove background' extension for A1111. Tutorial | Guide. Try repeating outpainting mk2 with Mask blur set to 0, 4, 8, 16, 24, 32, 48, 64 and see how the result changes. How do I roll back to ControlNet 1.440 on a hosted service using JupyterLab? I feel like the new ControlNet IP-Adapter FaceID v2 is giving me a slightly different face output than the one I've been using all this time with ControlNet 1.440. Apply unlimited masks to unlimited LoRA models. Apr 1, 2023 · Hello everyone! I am new to AI art, and part of my thesis is about generating custom images. Jun 19, 2024 · Stable Diffusion web UI. If it doesn't, I'd just do a quick Photoshop mask, adjusting the blur/mask settings in the webui. Dec 18, 2023 · Navigating the SD webui is beginner-friendly, offering a seamless tutorial for users. May 30, 2023 · The short story is that the ControlNet WebUI extension has completed several improvements/features of Inpaint. These areas are managed under the mask mode options: inpaint masked and inpaint not masked. Stable Diffusion web UI with Soft Inpainting. A workaround: disable non-needed extensions that clash in the Extensions tab. It works best, but depending on the region and selection this may need tweaking.
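To build intuition for that blur sweep, you can measure how wide the soft ramp actually becomes for a given setting. The sketch below assumes the slider is applied as a Gaussian blur with the value as its sigma, which is how PIL-style Gaussian blurs behave; it blurs a hard step edge and counts the pixels that are neither fully kept nor fully inpainted:

```python
import numpy as np

def ramp_width(radius: int, length: int = 512) -> int:
    # Hard 0/255 step edge, like an unblurred mask boundary.
    step = np.where(np.arange(length) < length // 2, 0.0, 255.0)
    # Gaussian kernel with sigma = radius, truncated at 4 sigma.
    x = np.arange(-4 * radius, 4 * radius + 1, dtype=np.float64)
    kernel = np.exp(-x**2 / (2.0 * radius**2))
    kernel /= kernel.sum()
    # Pad with edge values so the image borders stay at 0 and 255.
    padded = np.pad(step, 4 * radius, mode="edge")
    blurred = np.convolve(padded, kernel, mode="same")[4 * radius:-4 * radius]
    # Count pixels between 1% and 99% grey: the usable transition zone.
    return int(np.sum((blurred > 2.55) & (blurred < 252.45)))

for r in (4, 8, 16, 32):
    print(r, ramp_width(r))
```

Under this assumption, a blur of 4 yields a transition much wider than 4 pixels in total: the value behaves like a radius that spreads several pixels in each direction, which is also why a value of 64 can visibly shift an outpainted composition.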
However, a substantial amount of the code has been rewritten to improve performance and to better manage masks. Nothing I do changes how the mask looks like its just pasted on the original image. A stable-diffusion-webui extension for edit your mask edge Bilibili Link: sd新插件!解决生成图边缘色差问题!Segment Anything等语义分割生成的 These settings are specifically for Automatic1111's WebUI. 202, making it possible to achieve inpaint effects similar to Adobe Firefly Generati Hi. I found that using values >0 and <1 have weird color blending issues. Contribute to ComfyUI-Workflow/awesome-comfyui development by creating an account on GitHub. Contribute to CodeHatchling/stable-diffusion-webui-soft-inpainting development by creating an account on GitHub. Neither of these implement element id's or anything else that distinguishes them to be unique from each other. Much appreciated for the explanation. Download the IP Adapter ControlNet files here at huggingface. But you can influence it by changing the settings. ) Automatic1111 Web UI - PC - Free New Style Transfer Extension, ControlNet of Automatic1111 Stable Diffusion T2I-Adapter Color Control Reply reply More replies Mindestiny • I have an image that I'm really getting close to calling complete, but the eyes are red and I'm struggling with inpainting. We’re on a journey to advance and democratize artificial intelligence through open source and open science. 5 Optional - set the Mask blur (4 is fine for 512x512 but 8 for 1024x1024, etc. SAG works by de For the transition line, you could try to "feather" the mask. Just set it and forget it. Apr 30, 2024 · WebUI extension for ControlNet. Oct 2, 2022 · Hi! Could someone explain exactly how to use the "mask blur" option and give hints about when to use what values ? Thanks in advance! Inpainting: With the original image and the mask, ADetailer performs inpainting. We would like to show you a description here but the site won’t allow us. 
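The rule of thumb quoted above (Mask blur 4 at 512x512, 8 at 1024x1024) is just a linear scaling with image size, which can be captured in a tiny helper; the function name is my own:

```python
def mask_blur_for(width: int, height: int, base: int = 4, base_size: int = 512) -> int:
    # Scale the 512px baseline blur linearly with the longer image side.
    return max(1, round(base * max(width, height) / base_size))

print(mask_blur_for(512, 512))    # baseline value
print(mask_blur_for(1024, 1024))  # doubled resolution, doubled blur
```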
py [commands] [options] options: -h, --help show this help message and exit -v, --version show program's version number and exit commands: run run the program headless-run run the program in headless mode batch-run run the program in batch mode force-download force automate downloads and exit benchmark benchmark the program job-list list jobs by status job-create create a Evolved Fork of roop with Web Server and lots of additions - C0untFloyd/roop-unleashed Mar 15, 2024 · I installed WEB UI - FORGE on my computer and attempted to generate an image using Controlnet - Openpose functionality, but Controlnet did not work at all. Any help on how to get them right? Edit: I got it to work, but I'm leaving this post and I'll comment with some additional Feb 19, 2024 · 局部重繪(inpaint)。這是用AI填充塗黑(遮罩)區域的技術,例如給圖片的角色換衣服。或是反過來:讓AI把圖片空白的地方繪製完成(outpaint)。 Apr 1, 2023 · The above is my parameter and the resulting image. Or you can also see the question on the discussion page (Q&A) in the github for an alternate solution. Also, apparently inpainting can be used to remove extra fingers or correct face/a bit of model etc. Many things taking place here: note how only the area around the mask is sampled on (40x faster than sampling the whole image), it's being upscaled before sampling, then downsampled before stitching, and the mask is blurred before sampling plus the sampled image is blend in seamlessly into the original image. It is similar to the Detection Detailer. Resizing image using detection model. Contribute to Mikubill/sd-webui-controlnet development by creating an account on GitHub. I am essentially creating a very basic depth map. Aug 25, 2023 · Civitai | Stable Diffusion models, embeddings, LoRAs and more /region-blocked May 11, 2025 · After that, we should look at the parameter settings. Mask Mode: this changes how the mask works. Feb 12, 2023 · Inpaint upload would be if you create the mask in another program instead of in the web ui, like in photoshop. 
For backgrounds or fixing skin imperfections I would set it 1. Contribute to uxiost/stable-diffusion-webui-simple development by creating an account on GitHub. This tool leverages a diverse array of models, including RMBG-2. Custom nodes pack for ComfyUI This custom node helps to conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more. Find "Create Mask of Clothes" in the Extras tab after installing the extension. 3-0. ? How do I rollback? 💥 Updated online demo: . Next, Cagliostro) - Gourieff/sd-webui-reactor-sfw Again, when I say 'max resolution', I'm not talking about the overall size of the image but the size of the inpainting portion. This is great, I just wish someone could have a video tutorial on how to add the web ui to my computer, I use normal Stable Diffusion thanks to a video tutorial sohelp a brother out? Ultimate SD Upscale extension for AUTOMATIC1111 Stable Diffusion web UI Now you have the opportunity to use a large denoise (0. Next generation face swapper and enhancer. Note that if you do your own mask and you feather the edges yourself, you'd probably want to set mask blur to 0 in the web ui. a) Copy Auto-Photoshop plugin url b) Paste the url in auto1111's extension tab and click install c) Make sure the Auto-Photoshop plugin is listed, then click "Apply and Restart UI" May 23, 2023 · hi, recently I tried the diffuser inpaintpipeline and the SD-webui img2img inpaint, I found the result is different. Contribute to diffus-me/sd-webui-facefusion development by creating an account on GitHub. This update brings along several useful features. It also works well with denoising like 0. Feb 11, 2023 · And I don't understand any option at the bottom of screenshot, particularly "Mask Blur", "Masked Content", "Inpaint area" and the worst of all "only masked padding, pixels" (what is that?!). Also, it seems like the 'negative prompt models' are more of a detrimental interference than a help. 
It basically is like a PaintHua / InvokeAI way of using canvas to inpaint/outpaint. 0 license) Roman Suvorov, Elizaveta Logacheva, Anton Ma Wav2Lip UHQ extension for Automatic1111. Sampling m This is the final release of StableSwarmUI under Stability AI. It behaves not as blur, but as "diverge". Meanwhile here are the init parameters that are available on the Deforum extension: Mar 4, 2024 · Finally, we welcome the Automatic1111 Stable Diffusion WebUI v1. Apr 10, 2023 · Thanks for suggestions from github issues, reddit and bilibili to make this extension better. I wanted to draw attention to this tip because the 'trick' was non-intuitive to me. May 20, 2023 · Stable Diffusion web UI. It guides you through installing the Mosaic outpaint extension, generating an image, and using it to create a mosaic for inpainting. I need a way to cutout my subject, create a new background and paste my subject into that new background. In such degree that I don't understand how it can be called a "blur". I wonder what the differences from these two method? How does it work? The [zoom_enhance] shortcode searches your image for specified target (s), crops out the matching regions and processes them through [img2img]. It leverages Effective Region Mask: By turning it on, you can load a mask for the adaptive range of the control net. It lets you use a soft-edged brush or a gradient image like shown on the github page, compared to how it has been with a 1bit/hard edged mask. bin files, change the file extension from . 0, INSPYRENET, BEN, BEN2, BiRefNet, SDMatte models, SAM, SAM2 and GroundingDINO, while also incorporating a new feature for real-time background replacement and enhanced 🖥️ Run AI Agent in your browser. 
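The crop-and-process loop described for [zoom_enhance] can be sketched with Pillow. Here `process` is only a placeholder for the img2img round-trip, and the feathered paste mask is what hides the seam when the region is put back:

```python
from PIL import Image, ImageDraw, ImageFilter

def zoom_enhance(image, box, process, upscale=2, feather=8):
    # 1. Crop the matched region and upscale it so img2img has more pixels.
    region = image.crop(box)
    big = region.resize((region.width * upscale, region.height * upscale), Image.LANCZOS)
    big = process(big)  # placeholder for the img2img call
    # 2. Downscale back and paste through a feathered mask.
    small = big.resize(region.size, Image.LANCZOS)
    mask = Image.new("L", region.size, 0)
    ImageDraw.Draw(mask).rectangle(
        (feather, feather, region.width - feather, region.height - feather), fill=255)
    mask = mask.filter(ImageFilter.GaussianBlur(feather))
    out = image.copy()
    out.paste(small, box[:2], mask)
    return out
```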
The original developer will be maintaining an independent copy of this repo, see the Migration Guide here Major Updates added new model downloader utility tab #11 (comment) it can autodownload civitai metadata, and you can also use the civitai metadata importer to load metadata onto an existing model New theme " Punked ", inspired This means it is perfect for creating recognisable shapes, such as letters. The extension will allow you to use mask expansion and mask blur, which are necessary for achieving good results when outpainting and inpainting. The comfyui version of sd-webui-segment-anything. What's THE EASIEST/FASTEST way for me to do this? (Because I have to do this many, many times. But I also use automatic111's webui when I do inpainting. While online tutorials may make it seem effortless, the reality is that mastering Stable Diffusion is a process filled with trial and error, and it requires tweaking to get it just right. Contribute to oaf40/sd-webui-inpaint-mask-tools development by creating an account on GitHub. AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth. apply blur to the detected mask, softening the censor border, if the value is too high the censor Hi, this is an issue with automatic1111's web ui. Jun 29, 2023 · Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community. The recent inferences I made were based on just 3 or 4 words in the negative prompt, like 'blur,' 'blurry,' 'bad quality,' 'low quality,' and similar. As for the diffuser have change the part I have masked for unchanging. Original script with Gradio UI was written by a kind anonymopus user. bin to . 
The prompt used during txt2img Set Inpaint area to Whole picture to keep the coherency; Increase Mask blur as needed Set the Mar 20, 2024 · TLDR This tutorial walks you through the process of outpainting using Stable Diffusion Forge UI, a technique for expanding image borders by adding new elements. An implementation of Depthflow in ComfyUI. Stay tuned for more info. . Mar 5, 2024 · I recently came to an understanding while following a tutorial on inpainting that featured the "Soft Inpainting" option. 💥 Updated online demo: Colab Demo for GFPGAN ; (Another Colab Demo for the original paper model) 🚀 Thanks for your interest in our work. I really hope that I could have bandwidth to rework sd-webui-controlnet, but it requires a huge amount of time. I probably won't be able to answer questions that all the existing tutorials out there already do. 80. Face Editor for Stable Diffusion. Despite adhering to the instructions and utilizing the "Soft Inpainting" checkbox within Forge, it didn't perform as expected—especially when I increased the mask blur, which Jan 2, 2023 · Deforum allows the user to use image and video inits and masks. For your reference, when using the Detailer in Impact Pack, if noise_feather_mask is greater than 0, it automatically applies that internally. At the moment, only attention mode is usable. Now, however it only produces a "blur" when I paint the mask. Apr 1, 2023 · Mask Blur: this changes how much the inpainting mask is blurred before running the prompt. No more color bleeds or mixed features! 
Link: GitHub (DoesNOTwork with Automatic1111 Webui) Apr 24, 2023 · Set Parameters: Set the following parameters in the Infinite Zoom > Outpaint tab: mask_blur = 0 Denoising Strength = 1 In the Web-UI settings, select Stable Diffusion and adjust the following: Inpainting conditioning mask strength = 1 Noise multiplier for img2img = 1 Uncheck Apply color correction to img2img results to match original colors Aug 24, 2024 · You can approximately achieve a similar effect by selecting through Lasso Tool, feathering the selection with whatever your mask blur is in A1111, inverting the selection (ctrl + shift+ i), applying Average Blur through Filters -> Blur -> Average, copying the color (eyedropper), revert filter and selection inversion and use Paint Bucket to fill Such features will be Forge only. thx. Stable Diffusion web UI. Reactor has built in codeformer and GFPGAN, but all the advice I've read said to avoid them. It also provides a batch size parameter for generated image inpainting, enhancing the AI-powered features. Auto masking and inpainting for person, face, hand. Don't know if you guys have noticed, there's now a new extension called OpenOutpaint available in Automatic1111's web UI. How do you import a mask? IOW, I know I can draw a mask using the built in tool - which sucks - but how do I upload an image so that parts of it are unchanged and other parts are regenerated from nothing? I tried upload a png with transparency bit it did not work. 🛠️ Settings adjustments, such as mask blur and inpainting, are crucial for refining the image without visible seams or artifacts. Contribute to numz/sd-wav2lip-uhq development by creating an account on GitHub. Nov 5, 2022 · I expected the masking feature within the webui to do just that, mask an area. If I make the mask too large, I get big anime eyes. Put them in your "stable-diffusion-webui\models\ControlNet\" folder If you downloaded any . 3. You can draw a mask or scribble to guide how it should inpaint/outpaint. 
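A rough Python equivalent of that Lasso/feather/Average Blur recipe: take the average colour of the inverted selection (the surroundings) and composite it back through a mask feathered by the same amount as the A1111 mask blur. Function and parameter names are mine, and this is only an approximation of the Photoshop steps:

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFilter

def average_fill(image, box, mask_blur=8):
    # box is (left, top, right, bottom), right/bottom exclusive.
    arr = np.asarray(image, dtype=np.float64)
    sel = np.zeros(arr.shape[:2], dtype=bool)
    sel[box[1]:box[3], box[0]:box[2]] = True
    # Average colour of the surroundings, as with Average Blur on the
    # inverted selection.
    avg = tuple(int(round(c)) for c in arr[~sel].mean(axis=0))
    fill = Image.new(image.mode, image.size, avg)
    # Feather the selection by the same amount as the webui mask blur.
    mask = Image.new("L", image.size, 0)
    ImageDraw.Draw(mask).rectangle((box[0], box[1], box[2] - 1, box[3] - 1), fill=255)
    mask = mask.filter(ImageFilter.GaussianBlur(mask_blur))
    return Image.composite(fill, image, mask)
```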
If I make it too small, it doesn't fix much. This is an attempt to port over the regional prompting webui extension over to forge. 21. pth file and place it in the "stable-diffusion-webui\models\ESRGAN" folder. Contribute to facefusion/facefusion development by creating an account on GitHub. Let’s take a look at the key updates right away! Easily generate multiple subjects. Enhance Stable Diffusion image quality, prompt following, and more through multiple implementations of novel algorithms for Automatic1111 WebUI. Industry leading face manipulation platform. This process involves editing or filling in parts of the image based on the mask, offering users several customization options for detailed image modification. python facefusion. I don't think blurriness is a factor of ADetailer vs Swarm's segment, as the only difference there is the mask selector, the method of what happens after is the same to my knowledge (a zoomed in image of the section is regenerated) -- the blur difference would be a factor of model used and/or the Creativity value (aka denoise) Reply reply Jun 5, 2024 · In the transition region where the mask is between black and white, the latent images of the inpainted and the original content blend during sampling to achieve a seamless transition. This helps to avoid sharp edges on the inpainted image. Contribute to IAn2018cs/sd-webui-facefusion development by creating an account on GitHub. The final code should look like this: import tempfile def convert_to_sd (img Jun 9, 2023 · 1. Flux setup. I have attempted to use the Outpainting mk2 script within my Python code to outpaint an image, but I ha 170 votes, 77 comments. You should know the following before submitting an issue. This is a modification. After I run . You can also check out my demo. Add motion blur to videos. ADetailer is an extension for the stable diffusion webui that does automatic masking and inpainting. 
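That blending step can be written down directly. A sketch of the per-step linear interpolation in latent space, where grey mask values mix the inpainted and original latents:

```python
import numpy as np

def blend_latents(original, inpainted, mask):
    # mask in [0, 1], shape (H, W): 1 = take the inpainted latent,
    # 0 = keep the original; in-between values blend the two, which is
    # what produces the seamless transition in the soft region.
    m = np.clip(mask, 0.0, 1.0)[None, :, :]  # broadcast over channels
    return m * inpainted + (1.0 - m) * original
```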
Also, some advanced features in ControlNet Forge Intergrated, such as ControlNet per-frame mask, will also be Forge only. My very rough understanding is that SAG works by enhancing the self-attention process of the denoiser, hence "Self-Attention Guidance". Aug 22, 2024 · When you apply that and use SetLatentNoiseMask to apply a mask with varying intensities like a Gaussian blur, different levels of denoising are applied. Contribute to ototadana/sd-face-editor development by creating an account on GitHub. I also find it better to use a regular vae encode node and then merge the mask, rather than using 'vae encode for inpainting'. 5) and not spawn many artifacts. Contribute to AUTOMATIC1111/stable-diffusion-webui development by creating an account on GitHub. 5, just set high enough mask blur value (~20) 5. Oct 12, 2024 · I currently have transparent background 1. 1. I haven't been able to replicate the 'only masked area' workflow in comfy. Clip skip: 1 Add motion blur to videos. com Feb 10, 2024 · Stumbled over this new inpainting feature, after using lllyasviel's webui-forge and I mistakenly assumed that it had been developed by him, which is not the case, as he pointed out to me. Apr 19, 2023 · Which API parameters for these setting?Notifications You must be signed in to change notification settings Fork 29k Dec 17, 2024 · For Automatic1111, you need to first install SD Webui patcher extension by just pasting it into the git url section (link provided below) from the Automatic1111's extension tab. Preprocessor (Annotator): Creates a detect map with the specified annotator before adapting the control net. pth. It then blends the result back into your original image. Contribute to lifeisboringsoprogramming/sd-webui-lora-masks development by creating an account on GitHub. 1. 
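On the API-parameters question: the img2img endpoint accepts the inpaint settings by name. Below is a sketch of a request body for A1111's /sdapi/v1/img2img; I believe these field names match recent webui versions, but they can drift between releases, so check the live schema at /docs on your own instance before relying on them:

```python
import base64, json

def inpaint_payload(init_png: bytes, mask_png: bytes, prompt: str) -> dict:
    return {
        "prompt": prompt,
        "init_images": [base64.b64encode(init_png).decode()],
        "mask": base64.b64encode(mask_png).decode(),
        "mask_blur": 8,                  # "Mask blur" slider
        "inpainting_fill": 1,            # "Masked content": 1 = original
        "inpaint_full_res": True,        # "Inpaint area": only masked
        "inpaint_full_res_padding": 32,  # "Only masked padding, pixels"
        "inpainting_mask_invert": 0,     # 0 = inpaint masked, 1 = not masked
        "denoising_strength": 0.5,
    }

# POST the JSON body to http://127.0.0.1:7860/sdapi/v1/img2img
body = json.dumps(inpaint_payload(b"...", b"...", "a portrait, detailed face"))
```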
Mar 10, 2023 · disastrouscash: Hey, I already made this post on a Reddit board but received no help in 3 days, so I'm asking for help here. The last updates from yesterday should make it work. Contribute to browser-use/web-ui development by creating an account on GitHub. But suddenly it stopped working. Some key functions of FaceSwapLab include the ability to reuse faces via checkpoints, batch-process images, sort faces based on size or gender, and support for vladmandic. Sep 1, 2024 · When I use soft inpainting with a high mask blur value like 40, the end result diverges from the preview. In the example below I asked for a generic 1girl for testing purposes; on inpainting the prompt was "peace sign"; the preview result is pretty close, but the end result is something completely different. First I'd see if ControlNet segments it properly, and if so I'd convert that into an appropriate inpaint mask.