ComfyUI background removal: tips and workflows collected from the unofficial ComfyUI subreddit.
Welcome to the unofficial ComfyUI subreddit. This is a digest of community tips on removing and replacing image backgrounds in ComfyUI. (/r/StableDiffusion is back open after the protest of Reddit killing open API access.) A popular starting point is a "Remove Background + Batch Remove" workflow; one commenter notes its author recently added control over the strength of the noise.

Canvas size affects how much background you get: at 512x512, a prompt for "a character with background around it" tends to produce mostly the character with very little scenery, while larger sizes such as 1080x1920 leave room for the environment.

One user asked for tips on inpainting or replacing backgrounds such that the generated background is informed by the matted foreground elements: they use 3D renders of birds from Blender (keeping only the alpha channels plus Canny edges of the silhouettes, discarding the RGB renders) to generate geese, matte them out, and then need a matching background.

Useful basics: download any image from the ComfyUI examples site and drop it into ComfyUI, and ComfyUI will load that image's entire workflow. "ImageCompositeMasked" can remove the background from a character image and align it with a background image. There is an AnimateDiff workflow that handles foreground and background separately, and a SillyTavern character workflow that creates consistent characters with various outfits, poses, and facial expressions, saving the images into sorted output folders.

On removal methods: one user tested many AI rembg approaches (BRIA, U2Net, IsNet, SAM) and found none perfect in every case. The rembg-based node internally uses the Remover class, and one of the recommended models is under the Apache 2.0 license. To use it, install rembg[gpu] (recommended) or plain rembg, depending on GPU support, into your ComfyUI virtual environment.
Performance and masking tips. Some users find the lowvram and medvram flags not just useless but actively counterproductive on their hardware. If you want something to generate a mask for you, Segment Anything will build one from anything you can name within the image. On speed, one user reports about 1.30 s/it in A1111 (one 768x512 image) but regularly about 1.00 it/s in ComfyUI.

There is a fully open-source background remover optimized for images with humans, and an updated SillyTavern character expression workflow.

Feeding IPAdapter a cutout of an outfit on a colored background works, but the background color heavily influences the generated image. For product shots, there are tutorials on generating product backgrounds in ComfyUI.

To the question "what's the best background remover for Stable Diffusion (I use ComfyUI)?": the rembg node works well, but when a person's clothing is too similar to the background it can also remove a chunk of the person. For posing a character against a background, the easiest way is to put OpenPose on the background prompt.
A common request: remove the background of an image and replace black with white. In Photoshop you would select the key color (often green) with the dropper tool, delete it, and make it transparent; some users deliberately keep a green background for video editing later.

IC-Light has been tested for background replacement. A newcomer question: is there a way to generate a character and a background (such as a city) separately, then merge both in the KSampler so they share the same lighting, rather than simply pasting one onto the other?

If compositing in ComfyUI fails, the fallback is to generate backgrounds with no characters in them and replace backgrounds in a photo manipulation program like Photoshop. Among the rembg models, u2netp offers faster processing with a slight quality trade-off.
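The green-screen removal described above reduces to a per-pixel chroma key. A minimal sketch in plain Python (no imaging library) follows; the key color and tolerance are illustrative assumptions, and real tools apply the same test per pixel of an image.

```python
def chroma_key(pixels, key=(0, 255, 0), tolerance=60):
    """Make pixels close to the key color transparent.

    pixels: list of (R, G, B) tuples; returns (R, G, B, A) tuples,
    where A=0 marks removed (background) pixels.
    """
    out = []
    for r, g, b in pixels:
        # Euclidean distance in RGB space to the key color.
        dist = ((r - key[0]) ** 2 + (g - key[1]) ** 2 + (b - key[2]) ** 2) ** 0.5
        alpha = 0 if dist <= tolerance else 255
        out.append((r, g, b, alpha))
    return out

# A 3-pixel "image": pure green, near-green, and a red subject pixel.
result = chroma_key([(0, 255, 0), (10, 240, 15), (200, 30, 30)])
```

To replace the background with white instead of transparency, substitute `(255, 255, 255, 255)` for the keyed pixels instead of zeroing the alpha.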
On updates: ComfyUI sometimes shows a note that a new version is available and should be installed from the update .bat file.

Character training: you can train with fewer steps, but you will likely get a poorly trained character. For product work, one workflow starts from an existing picture (or generates a product), segments the subject via SAM, and generates a new background. Another user generated characters with no background, placed them all in Photoshop the way they wanted, but without a scenario the result looked fake.

A recurring question: in ComfyUI, how do you persist your random / wildcard / generated prompt so you can see the true prompt that created an image? ComfyUI only saves data available while queuing the prompt; with wildcards, that data often cannot regenerate the same image.

Smaller tips: remove three of the four stick figures from a pose image if you only want one character; with the image-chooser node, pick the image to keep from "Select images to save" and run the process again; and one rembg fork lets you choose which ONNX model to use, since different models have different effects and picking the right one gives better results. PramaLLC state: "You can use our BEN model commercially without any problem." And a common learning goal throughout this thread: changing the background of a product image.
Again on the fully open-source human-optimized background remover: search your nodes for "rembg" to find the removal node (an anime-focused variant is authored by kwaroran). For portable installs, go to your base ComfyUI folder, where alongside the ComfyUI folder you will find python_embeded; that is the Python environment to install rembg into. Among the rembg models, silueta offers enhanced edge detection for finer details.

Compositing problem: one user liked their separately generated characters but could not work out how to composite them together, exactly as they were, into one coherent background.

Resources: a pack of 60+ workflows including Remove Background + Batch Remove; a Studio ComfyUI workflow for batch generation with 294 styles from a style chooser; and an IC-Light workflow for background and light changes that preserves object details like text.

You can also key the background outside ComfyUI: in DaVinci Resolve's Fusion tab, a Delta Keyer removes a green background from a layer. And a general note: ComfyUI is very intimidating at first, so it is understandable that people are put off by it.
LoRA training trick: tag training images with directional background captions, so when the LoRA is prompted "background, north, east" there is no seam between what it learned from the north and east images; this makes the background sections across training images more consistent.

Troubleshooting: one user removed the background from an image and turned a suitcase into a mask, but the new background was being applied to the suitcase as a texture. A trick for clean starting masks: remove the background from a person, then color-fill the subject white and the background black. For realism, one user aims for a selfie look, not a professional photoshoot look. Another tip: after adding noise, remove it in 5-6 steps.

On reproducibility: ComfyUI only saves data available during queuing of the prompt, so wildcard-driven images often cannot be regenerated exactly. On the other hand, a PNG saved from ComfyUI embeds the workflow and prompts, so dragging it back into ComfyUI restores everything as it was when the image was created.

The Rembg Background Removal node for ComfyUI is the standard tool, and three methods are commonly cited for background removal in ComfyUI overall; the ComfyUI Examples page on GitHub is worth a look. LayerDiffusion does work for img2img, but not very usefully as far as one tester could see. Several users report switching from A1111 to ComfyUI last year and never looking back.
On RunPod, the notebook RNPD-ComfyUI.ipynb lives in /workspace: run all the cells, and when the ComfyUI cell is running, connect to port 3001 from the "My Pods" tab as you would for any other Stable Diffusion UI.

Two-checkpoint question: yes, you can create an image with one checkpoint, remove the background, use another checkpoint in the same workflow to create the background, and then merge the two. One user's cutout always came back with white or red backgrounds even after removing them in Photoshop, which usually means the alpha channel was flattened on load.

A shared workflow by yu replaces the background of a person with a transparent or a specified color: set an image using LoadImage and execute the workflow. The rembg removal node works on images, not video, so video must be processed frame by frame. Other questions: segmenting wires and cables in the background of images so they can be removed, and why some models insist on adding people (many checkpoints have people in almost all training samples).

Blender-side preparation for 3D renders: remove the default light, camera, and block before importing your model. And on workflow structure: if content outside your mask is changing, moving the subject into the pre-diffusion group (or compositing explicitly) is the usual fix.
Things one user tried for background swaps: using the mask to inpaint (a few ways) to remove the woman entirely, then adding her back in via composite. For comparing upscaled images, Upscale.media offers a convenient zoom view.

Video object removal may be possible in ComfyUI using inpainting techniques. On the node side, there is a LoRA Caption custom node, and an Anime Background Remover node for ComfyUI based on a Hugging Face space that works like the ABG extension in Automatic1111.

For backgrounds without people, look for models that promote landscapes, or general models like DreamShaper, anything not focused on photorealistic people (which a solid 50 to 80 percent are).

Many thanks to the author of rembg-comfyui-node; it is a very useful tool. To build a seamless ComfyUI workflow that renders any image and produces a clean mask (with accurate hair detail) for compositing onto any background, you need nodes designed for high-quality image processing and precise masking; GeekyRemB is one such node, bringing background removal, blending, and animation capabilities to ComfyUI. If you manually installed a venv or conda environment for ComfyUI, it is assumed you know how to install packages into it; otherwise the instructions assume the Windows portable build. You should also have the required packages, such as torch and Pillow, installed.
Here's an example from Krea: its canvas has a useful rotate/dimension tool on a pasted image, the kind of dynamic repositioning that would be welcome inside ComfyUI.

The two-checkpoint workflow again: create an image with one checkpoint, remove the background, use another checkpoint in the same workflow to create the background, then merge them. For inpainting and outpainting specifically, some users prefer Krita with ComfyUI as the backend.

On licensing, the only commercial piece of BEN is the BEN+Refiner; BEN_BASE is perfectly fine for commercial use. Feature-wise, there is little left that A1111 can do and ComfyUI cannot, including XYZ plots.

On OpenPose and regional prompting: put OpenPose on the background prompt. It is counterintuitive, but since the background prompt applies to the full picture, so does OpenPose. That way your two subjects (say, a xenomorph and a woman) share the same background instead of two blended ones.

ComfyUI is also used for style transfer on videos and images. For animation, one user wants to create a character with Animate Anyone and the background with SVD; note that WAS's Remove Background node cannot output the mask, which blocks some of these workflows. Another user removed the lowvram flag and found everything almost twice as fast as before. Finally, there is a mosaic-tile workflow for expanding an image into a wallpaper; it can expand anything, in any direction, by any amount.
Product masking problem: the current workflow uses a mask that works and changes the background, but the edges of the product look out of place, like a bad Photoshop job. A manual trick for building masks: fill the region with red (255-0-0) on a black background, then select the red channel.

On AMD GPUs, installation guidance is conflicting: the ComfyUI GitHub page says to install DirectML, while other sources say you need Miniconda or Anaconda, so expect some trial and error.

Other requests and tips: a workflow or tutorial for removing an object or region (generative fill) in an image; workflow-sharing sites where you can upload your own ComfyUI workflows so others can build on them; and the reminder that ComfyUI saves workflow info inside each image it generates. For sharper backgrounds, use positive prompts like wide lens, detailed background, 35mm; one user tried the Background Detail and hotarublurbk LoRAs with no effect, and applying "denoise: 0.5" to reduce changes. ComfyUI has also been used to rotoscope an actor and restyle the background into a different style of living room, so every video does not look like the same location.
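The red-fill trick above (paint the region red on black, then use the red channel as the mask) reduces to thresholding one channel. A minimal sketch, with an illustrative threshold value:

```python
def mask_from_red_channel(pixels, threshold=128):
    """Given (R, G, B) pixels, return a mask: 255 where red dominates, else 0.

    Pure red paint on a black background makes this trivial: only painted
    pixels have a strong red component with weak green and blue.
    """
    return [255 if r >= threshold and g < threshold and b < threshold else 0
            for r, g, b in pixels]

# Red-painted region, black background, and a white pixel (not pure red).
mask = mask_from_red_channel([(255, 0, 0), (0, 0, 0), (255, 255, 255)])
```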
The removal node takes an image tensor as input and returns two outputs: the image with the background removed and a mask. The mask is derived from the alpha channel of the processed image.

With LtDrData's ComfyUI Workflow Component, you can move all the mask-logic nodes behind the scenes. One goal: generate a background and a character simultaneously, then inpaint the character onto the background. Things get tricky if you want the KSampler to skip running when nothing is detected, because ComfyUI does not really support branching workflows; since detection and removal are meant to be automatic, muting nodes is the usual workaround.

The abg node ("Remove Image Background") is another option, and among the rembg models, isnet-general-use offers balanced performance for various subjects. The new queue preview looks much better and offers deleting items, though the generated image can remain.

A LayerDiffusion caveat: in ComfyUI it does not make the background alpha channel transparent unless the background of the source image was already a neutral gray.
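The node behavior described above (image in, image plus alpha-derived mask out) is easy to sketch. Real ComfyUI nodes operate on torch tensors, but the logic is the same; this stand-in works on plain RGBA tuples:

```python
def split_image_and_mask(rgba_pixels):
    """Split RGBA pixels into the color image and a mask from the alpha channel.

    Mirrors how removal nodes return (image, mask): the mask is simply the
    alpha channel, with 255 = kept foreground and 0 = removed background.
    """
    image = [(r, g, b) for r, g, b, a in rgba_pixels]
    mask = [a for r, g, b, a in rgba_pixels]
    return image, mask

# One opaque subject pixel and one fully removed background pixel.
image, mask = split_image_and_mask([(10, 20, 30, 255), (0, 0, 0, 0)])
```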
More testing of IC-Light for background replacement. A speed datapoint: about 1.30 s/it with A1111 (one 768x512 image) versus regularly about 1.00 it/s with ComfyUI. One user generates random portraits with dynamic prompts and removes the background with the rembg node, but hits two problems: content outside the mask also changes (the guy in the foreground), and the edges are very bad.

The commonly cited rembg models:

- u2net: general-purpose, high-quality background removal.
- u2netp: faster processing with a slight quality trade-off.
- u2net_human_seg: optimized for human subjects.
- u2net_cloth_seg: specialized for clothing segmentation.
- silueta: enhanced edge detection for finer details.
- isnet-general-use: balanced performance for various subjects.

There is also a free website that removes backgrounds and, in one user's view, does it far better than the Stable Diffusion options; the Nodes Library is worth browsing too. Note that the Remove Background node sometimes just makes the background black rather than transparent.

A basic background-replacement recipe: rembg to remove the background, use the result as a mask, invert the mask, grow the mask, blur the mask, then feed it into a regular inpainting workflow. Beyond that, your best bet is model choice and prompting. For dataset curation, you can also remove images whose backgrounds are very similar to each other.
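The invert / grow / blur steps in the recipe above can be sketched on a 1-D mask; real nodes do the same in 2-D. Growing is a dilation (max over a neighborhood) and blurring is a small box filter that softens the edge:

```python
def invert(mask):
    """Flip foreground and background in a 0-255 mask."""
    return [255 - v for v in mask]

def grow(mask, radius=1):
    """Dilate: each pixel becomes the max of its neighborhood."""
    n = len(mask)
    return [max(mask[max(0, i - radius):min(n, i + radius + 1)]) for i in range(n)]

def blur(mask, radius=1):
    """Box blur: average over the neighborhood, feathering hard edges."""
    n = len(mask)
    out = []
    for i in range(n):
        window = mask[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) // len(window))
    return out

# Subject mask (255 = subject); invert to select the background, grow, then blur.
subject = [0, 255, 255, 255, 255, 0]
background = blur(grow(invert(subject)))
```

Growing before blurring ensures the inpainting region overlaps the subject's edge slightly, which hides the bad edges the rembg output leaves behind.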
The red-channel trick also works with green. Meanwhile, one user went back to basics, reading the official manuals in the ComfyUI repo on how to operate the UI.

Is video object removal possible in ComfyUI, like anieraser.media? The chflame163/ComfyUI_LayerStyle pack does part of this with its Image Blend node, but only one image at a time. In Photoshop the equivalent is Select Color Range, eyedropper the color with a fuzzy range, and remove it completely.

Two tool requests: first, a node that automatically removes the background of loaded images (WAS can do this) but also allows dynamic repositioning, the way you would do it in Krita; second, a fix for an install loop where ComfyUI keeps looking for missing files, downloading them, and starting over.

For clean results, the ComfyUI-Inspyrenet-Rembg node (by john-mnz) implements InSPyReNet, one of the best methods to date. Swapping out the background while keeping the subject can easily be done with the Remove Background and compose-image nodes in the WAS Suite, for example.
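The WAS-style "remove background, then compose" step boils down to alpha-compositing the cutout over a new background with the standard "over" operator. A per-pixel sketch:

```python
def composite_over(fg_rgba, bg_rgb):
    """Composite one foreground (R, G, B, A) pixel over a background (R, G, B) pixel."""
    r, g, b, a = fg_rgba
    alpha = a / 255.0
    # Standard "over" operator: blend by the foreground's alpha.
    return tuple(round(alpha * f + (1 - alpha) * bgc)
                 for f, bgc in zip((r, g, b), bg_rgb))

def compose(cutout, background):
    """Pixel-wise composite of a cutout (RGBA list) onto a background (RGB list)."""
    return [composite_over(f, b) for f, b in zip(cutout, background)]

# An opaque red subject pixel and a removed (transparent) pixel, over blue.
result = compose([(255, 0, 0, 255), (0, 0, 0, 0)], [(0, 0, 255), (0, 0, 255)])
```

Partial alpha values at the matte's edge blend the two, which is why a feathered mask from the remover gives softer, more natural edges than a hard cutout.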
Workflow idea: draw a seed image or two on a transparent background, then generate a solid background behind them. If you do not want backgrounds in your training images, there are resources to remove them; the open question is whether that is merely time-consuming or whether it creates weird artifacts and distortions. See also the LoRA Caption custom node for ComfyUI, and huchenlei's ComfyUI repository on GitHub.

Is there a way to remove the background using a mask, make it transparent, and save the result as a PNG? One test video had a man walking on a white background. A related client question: how best to composite a studio shot of a subject onto an AI-generated background (possibly one you already have), considering both the background source (a prompt or an image) and the blend.

So, how do you remove a background in ComfyUI, and what is the fastest node for it? The WAS custom node pack has a Remove Background node that works fantastically, especially if you prompt for a plain background in the first place. For animation, users also ask whether an MP4 can be exported with a transparent background.
One user always gets a slight green tint left in the background after keying; in Resolve, raising the Delta Keyer's matte threshold eats into the subject's edges so it blends better with the background (as in a Spider-Man composite example).

The separated subject/background conditioning fields are not a fix for subject bleed; that is not really what those fields do. For newcomers, there are collected threads of beginner tips for ComfyUI.

Why not just img2img with low denoise to harmonize a composite? It is the simplest solution, but unfortunately it does not work well, because significant subject and background detail is lost in the encode/decode process.

On RunPod, the advice is to skip the ComfyUI template and load Fast Stable Diffusion instead. One contributor is open-sourcing a human segmentation dataset for creating a truly open background-remover model. And about the preview-chooser (image chooser) custom node: it pauses the flow while you choose which image or latent to pass on, giving a preview and a continue button, but no mask output, so add a save-image node after it in your workflow if needed.
For automatic per-frame video background removal to hold up, you need a background that is stable (a dance room, wall, gym, and so on).

How do you remove a background in Comfy? The WAS custom node pack has a removal node that works fantastically. The ComfyUI-Wiki is an online quick-reference manual for ComfyUI. One caution when re-rendering characters: a figure's little grin disappeared and his face changed, so protect faces when restyling.

Three methods are commonly used in ComfyUI to remove backgrounds, each with workflows. The u2net model (downloadable, open source) is a pre-trained model for general use. A product-focused workflow lets you remove a background or replace it with something new, which matters for anyone enhancing product shots. ComfyUI-DragNUWA goes further: you manipulate backgrounds or objects directly within images, and the model translates those actions into camera movements or object motions in the generated video. InvokeAI, for comparison, has a function that adds latent noise to masks during img2img.

Blender preparation for a 3D reference render: import your OBJ model, enter Edit Mode, select all with "A" (everything turns yellow as all polygons are selected), press "U" and choose Unwrap, then return to Object Mode. For multi-character scenes, generate one character at a time, remove each background with the Rembg Background Removal node, and repeat for all characters.

On BRIA: it removes the background by masking, but it only removes; if you want to replace the background, you still need to inpaint a new one.
Removing the background of a generated video animation: the rembg node does not accept video directly, so split the video into frames first. The ComfyUI GitHub links to simple example workflows, the ComfyUI blog, and other useful resources; there is also a tutorial on background and light control using IPAdapter.

If there is only one figure, even a phone's built-in subject lift will do the job. On performance, remember that Windows does things in the background to manage memory and swap. One user notes that ever more complex Visio-style node graphs do not necessarily make the image better, just a little different.

ComfyFlow can turn a ComfyUI workflow into a web app in seconds. A common compositing task: copy-paste a transparent-PNG subject into a background, then do something to make the subject look like it was naturally in the scene. One UI issue: the floating menu node can go missing at launch and is hard to get back.
I used BRIA AI for the background removal.

I'm looking for a way to set up a generation pipeline that reliably produces subjects on a flat, solid white background.

So you have the preview and a button to continue the workflow, but no mask, and you would need to add a Save Image node after this node in your workflow.

The workflow currently removes the background from the Animate Anyone video with the rembg node, and then I want to layer it frame by frame onto the SVD frames.
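Layering matted frames onto SVD frames one by one depends on both folders using the same zero-padded naming, so that sorting the filenames pairs them up correctly. A small sketch of that pairing step (the folder names are hypothetical, and it only creates empty placeholder files):

```python
from pathlib import Path
import tempfile

# Hypothetical layout: matted character frames and SVD background frames,
# both saved with zero-padded names so sorted() lines them up one-to-one.
root = Path(tempfile.mkdtemp())
(root / "character").mkdir()
(root / "svd").mkdir()
for i in range(1, 4):
    (root / "character" / f"{i:05d}.png").touch()
    (root / "svd" / f"{i:05d}.png").touch()

pairs = list(zip(sorted((root / "character").glob("*.png")),
                 sorted((root / "svd").glob("*.png"))))
for fg, bg in pairs:
    # here each matted fg frame would be composited over its bg frame
    assert fg.name == bg.name
print(len(pairs))  # 3
```

Without the zero padding, lexicographic sorting would pair frame 10 before frame 2 and the layers would drift out of sync.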
The only references I've been able to find make mention of this inpainting model, using raw Python or Auto1111.

PRO-TIPS: avoid overlap between boxes.

I had some success with ControlNet, but I don't want to prescribe any specific shapes that the image should take: it should be whatever the model determines as fitting.

A lot of people are just discovering this technology and want to show off what they created.

I took a picture with the product in the center, and now I want to change the background to anything I want.

It might be a couple of extra steps, but if you really wanted to combine the backgrounds, you could use Photoshop to remove the backgrounds from the input images and then put a similar background in each.

SECOND UPDATE - HOLY COW I LOVE COMFYUI EDITION: Look at that beauty! Spaghetti no more.

I was hoping to get transparency like shown in the Forge video; so far Comfy seems not to have that, just background or foreground removal.

Testing IC-Light for Background Replacement.

You can select a transparent or solid color background.
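The frame-extraction step described earlier (frames saved with zero-padded names like 000XX.png) maps directly onto ffmpeg's `%05d` output pattern. A sketch that only builds the command (the paths are hypothetical, and ffmpeg itself must be installed before actually running it):

```python
# Hypothetical input clip and output folder for the extracted frames.
clip = "input/clip.mp4"
out_pattern = "input/frames/%05d.png"  # yields 00001.png, 00002.png, ...

cmd = ["ffmpeg", "-i", clip, out_pattern]
print(" ".join(cmd))
# to actually extract the frames: subprocess.run(cmd, check=True)
```

The zero-padded pattern is what keeps the frames in order when ComfyUI (or a batch loader) later reads the folder back sorted by filename.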