ComfyUI workflow PNG examples (a Reddit compilation)


ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image-generation workflow by chaining different blocks (called nodes) together; some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements, so you can easily make your own.

The PNG files produced by ComfyUI contain the full workflow in their metadata. Download and drop any such image into ComfyUI (or open it with the Load button) and ComfyUI will load that image's entire workflow, including the seeds that were used to create it. Note that you need to drag the image onto an empty spot on the canvas, not onto a Load Image node or similar. This makes it very convenient to share workflows with others: just download the image, drag it inside ComfyUI, and you'll have the same workflow you see in the post. People who have just started with ComfyUI tend to love this drag-and-drop feature. The ability to read workflows embedded in images is not connected to the workspace configuration; the graph travels inside the PNG itself.

All the images in the ComfyUI examples repo contain this metadata, which means they can be loaded with the Load button (or dragged onto the window) to get the full workflow that was used to create them. For example, you can load or drag the Flux Schnell example image (https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_schnell_example.png) into ComfyUI to get the workflow. Flux Schnell is a distilled 4-step model; the diffusion model weights are linked from that page, and the file should go in your ComfyUI/models/unet/ folder.
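Concretely, ComfyUI stores the graph as JSON in PNG text chunks: a "workflow" chunk with the UI graph and a "prompt" chunk with the API-format graph. Below is a minimal sketch (not ComfyUI's own code) for checking whether an image still carries its workflow; it assumes Pillow is installed, and the file name is just an example:

```python
import json
from PIL import Image  # pip install Pillow

def extract_workflow(path: str):
    """Return the embedded ComfyUI graph as a dict, or None if it was stripped."""
    info = Image.open(path).info
    # ComfyUI writes two PNG text chunks: "workflow" (UI graph) and "prompt" (API graph)
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None

wf = extract_workflow("example_workflow.png")  # hypothetical file name
print("workflow intact" if wf else "metadata stripped, e.g. by an image host")
```

If this prints that the metadata was stripped, dragging the file into ComfyUI will do nothing; you need the original PNG or a separate JSON export of the graph.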
Note that ComfyUI is only able to load workflows saved with the "Save" button, and not with the "Save (API Format)" button. The API workflows are not the same format as an image workflow: you create the workflow in ComfyUI as usual, then export it with the "Save (API Format)" button that sits under the Save button you've probably used before. If you can't see that button, you need to check "enable dev mode options" in the settings.

The API format is the one you want for driving ComfyUI from code. The comfy_api_simplified package, for example, can be used to send images, run workflows, and receive images from a running ComfyUI server; its author uses it as a layer between a Telegram bot and ComfyUI, running different workflows and returning results from the user's text and image input. There is also a ComfyUI-to-Python converter, which works by turning your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include streamlining a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt and parameter values.
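For reference, here is a sketch of the raw HTTP route, with no extra packages: it queues an API-format export against a local server and polls for the finished images. The server address, the node id "3", and the file names are assumptions; look up your own sampler's id inside the exported JSON.

```python
import json
import time
import urllib.parse
import urllib.request

BASE = "http://127.0.0.1:8188"  # default local ComfyUI address

with open("workflow_api.json") as f:   # saved via "Save (API Format)"
    graph = json.load(f)
graph["3"]["inputs"]["seed"] = 42      # "3" is a hypothetical KSampler node id

# Queue the graph; the server answers with a prompt_id for this job.
req = urllib.request.Request(
    f"{BASE}/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
prompt_id = json.load(urllib.request.urlopen(req))["prompt_id"]

# Poll /history until the job appears as finished, then fetch every output image.
while True:
    with urllib.request.urlopen(f"{BASE}/history/{prompt_id}") as r:
        entry = json.load(r).get(prompt_id)
    if entry:
        break
    time.sleep(1)

for node_output in entry["outputs"].values():
    for img in node_output.get("images", []):
        query = urllib.parse.urlencode(img)  # filename, subfolder, type
        with urllib.request.urlopen(f"{BASE}/view?{query}") as r:
            with open(img["filename"], "wb") as out:
                out.write(r.read())
```

Wrappers like comfy_api_simplified automate essentially this sequence for you.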
A common complaint goes like this: "I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. If I drag and drop the image, it is supposed to load the workflow, right? I also extracted the workflow from its metadata and tried to load it, but it doesn't load. This happens whenever I attempt to drag PNG/JPG files that include workflows into ComfyUI, be it examples or my own." The usual causes:

- The host stripped the metadata. "I generated images from ComfyUI, and the problem I'm having is that Reddit strips this information out of the PNG files when I try to upload them." This is why people place images into a zip before posting ("people have told me that Reddit strips .pngs of metadata"), and why the standing advice is: please upload the PNG to civitai.com and then post a link back here if you are willing to share it. Or, better: "I'll do you one better, and send you a png you can directly load into Comfy."
- The images have to contain a workflow in the first place, so one you've generated yourself, for example. If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI, and it will automatically contain the workflow as well. But Reddit will strip it away on upload.
- If the image was generated in ComfyUI and the metadata is intact (some users and websites remove the metadata), you can just drag the image into your ComfyUI window. On civitai, the image page should have a "Workflow: xx Nodes" box; click this and paste into Comfy. The YouTube videos by Sebastian Kamph and Olivio Sarikas where they simply drop PNGs into an empty ComfyUI work because that metadata survived.
- Most workflows you see on GitHub can also be downloaded: there is, say, an "example_workflow.png" in the file list at the top, and you should click Download Raw File. Alas, in some cases the workflow still does not load; apparently the dev uploaded a version with trimmed data.
- Networking can get in the way too: "I can load workflows from the example images through localhost:8188; this seems to work fine. I can load ComfyUI through 192.168.0.1:8188, but when I try to load a flow through one of the example images, it just does nothing. I can't load workflows from the example images using a second computer." (This was with the ComfyUI notebook from their repo, used remotely in Paperspace.)
- One reported bug: "About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it puts in the wrong positive prompt."
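If you still have the workflow JSON (for instance a Save export) but the image itself lost its metadata, you can re-attach the graph yourself. A minimal sketch, assuming Pillow and hypothetical file names; "workflow" is the text-chunk key ComfyUI looks for on drag-and-drop:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo  # pip install Pillow

img = Image.open("stripped.png")         # image whose metadata was removed
meta = PngInfo()
with open("workflow.json") as f:         # UI-format export of the original graph
    meta.add_text("workflow", f.read())  # the chunk key ComfyUI reads on drop
img.save("restored.png", pnginfo=meta)   # restored.png now loads the graph again
```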
Where can one get ready-made, elaborate workflows, for example ones that do Tile Upscale like we're used to in AUTOMATIC1111, to produce huge images? A few sources come up repeatedly:

- Comfy Workflows, a site to share, discover, and run thousands of ComfyUI workflows. Its author: "I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates. I always love seeing a cool image online and trying to reproduce it, but finding the original method or workflow is troublesome, since Google's image search just shows similar-looking images. So I added reverse image search that queries a workflow catalog to find workflows that produce similar-looking results. You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them."
- "Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to https://openart.ai/profile/neuralunk?sort=most_liked. Hope you like some of them :)"
- The ComfyUI examples repo, which contains examples of what is achievable with ComfyUI. Generally speaking, workflows seen on GitHub can also be used. ComfyUI could have workflow screenshots, like the examples repo has, to demonstrate possible usage and the variety of extensions; I think the perfect place for them is the Wiki on GitHub. A1111 has great categories like Features and Extensions that simply show what the repo can do and what addons are out there, plus a ton of extensions which provide plenty of ease-of-use cases.
- Curated starting points: a workflow to merge two images together; a ControlNet Depth workflow to enhance your SDXL images; an animation workflow as a great starting point for AnimateDiff (AnimateDiff in ComfyUI is an amazing way to generate AI videos); a ControlNet workflow as a great starting point for using ControlNet; and an inpainting workflow as a great starting point for inpainting. There is also a group node that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.
- "ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it: https://youtu.be/ppE1W0-LJas. Upcoming tutorial: SDXL LoRA, plus using a 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more. Mine do include workflows, for the most part in the video description." (Sharing etiquette: otherwise, please change the flair to "Workflow not included.")
- "An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img2img and txt2img. It can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more. This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it."
- "Here is the workflow for ComfyUI, updated to a folder on Google Drive, with both the JSON and PNG of some of my workflows, for example img2vid (no refiner) by @midjourney_man."
- "This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far, showing the difference between the Preliminary, Base, and Refiner setups. I found it very helpful."
- Swarm: for your all-in-one workflow, use the Generate tab. It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. "For example, I just glance at my workflows, pick the one that I want, drag and drop it into ComfyUI, and I'm ready to go."

People also build batch pipelines around these workflows. For example: give it a folder of images of outfits (with, say, outfit1.png and outfit1.txt containing a prompt describing the outfit in outfit1.png); give it a folder of OpenPose poses to iterate over (removing 3 of the 4 stick figures in the pose image where needed); create a list of emotion expressions, where a text file with multiple lines in the format "emotionName|prompt for emotion" will be used; and generate one character at a time, removing the background with the Rembg Background Removal node for ComfyUI. One freelancer describes the commercial version of this: "I have a client who has asked me to produce a ComfyUI workflow as a backend for a front-end mobile app (which someone else is developing using React); he wants a basic faceswap workflow." A sketch of parsing that emotion file follows below.
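As a quick sketch of consuming that emotion list in Python (the pipe-separated format comes from the description above; the file name emotions.txt and everything else here are assumptions):

```python
from pathlib import Path

def load_emotions(path: str) -> dict[str, str]:
    """Parse lines like 'happy|smiling, joyful expression' into {name: prompt}."""
    emotions = {}
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if "|" in line:  # skip blank or malformed lines
            name, prompt = line.split("|", 1)
            emotions[name.strip()] = prompt.strip()
    return emotions

for name, prompt in load_emotions("emotions.txt").items():
    print(f"{name}: {prompt}")  # feed each prompt into the workflow's text input
```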
Beyond loading and sharing, the threads collect a lot of hands-on technique notes.

Upscaling and sampling experiments. "Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution; it is a simple way to compare these methods, though it is a bit messy, as I have no artistic cell in my body. The workflow is kept very simple for this test: Load Image, Upscale, Save Image. Ignore the prompts and setup; no attempts were made to fix JPG artifacts. This was really a test of ComfyUI. PS: if someone has access to Magnific AI, please can you upscale and post the result for 256x384 (JPG quality 5) and 256x384 (JPG quality 0)?" Another user: "I'm trying to do the same as hires fix, with a model and weight below 0.5, from 512x512 to 2048x2048. Instead, I created a simplified 2048x2048 workflow." From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node. On sampling: "I conducted an experiment on a single image using SDXL 1.0 and ComfyUI to explore how doubling the sample count affects performance, especially at higher sample counts, seeing where the image changes relative to the sampling steps. Increasing the sample count leads to more stable and consistent results."

Iterative img2img. "I have a workflow with a loop where the latest generated image is loaded, encoded to latent space, sampled with 0.5 noise, decoded, then saved. Each time I do a step, I can see the color being somehow changed, and the quality and color coherence of the newly generated pictures are hard to maintain. Any ideas on this?"

VAEs. Using just the base model in AUTOMATIC with no VAE produces this same result. That's because the base 1.0 version of the SDXL model already has that VAE embedded in it.

Inpainting. "Comfy's inpainting and masking ain't perfect; a somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask) and use 'only masked area' in a way that also applies to the ControlNet (applying it to the ControlNet was probably the worst part). But for a base to start at, it'll work. I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow; it's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you."

LoRA plus ControlNet. "The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA. From what I see in the ControlNet and T2I-Adapter examples, this allows me to set both a character pose and the position in the composition. Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, and re-rendering with a second model." A caveat on the example images: they do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

Documentation. As a programmer, the workflow logic should be relatively easy to understand, but the function of each node cannot be inferred by simply looking at its name, and the documentation uses highly technical language with no examples, which makes it worse. "In this guide I will try to help you with starting out and give you some starting workflows to work with; hopefully this will be useful to you. First of all, sorry if this has been covered before; I did search and nothing came back."

On terminology, several commenters agreed: if the term "workflow" has been used to describe node graphs for a long time (one commenter thinks 3DS Max used it first), then that's unfortunate, because now it has become entrenched. If the term has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or simply "nodes". Just my two cents.

A dead-simple reference workflow. "Model: dreamshaper_7. Positive prompt: sexy ginger heroine in leather armor, anime. Negative prompt: ugly. Sampler: euler. Steps: 20. CFG: 8. Seed: 674367638536724. That's it. The sample prompt as a test shows a really great result. And my workflow itself, for something like SDXL with Refiner upscaled to 4k x 4k, is super simple."

Breakdown of workflow content, as one author structures their prompts:

=== How to prompt this workflow ===
Main Prompt: the subject of the image in natural language. Example: a cat with a hat in a grass field.
Secondary Prompt: a list of keywords derived from the main prompt, with references to artists at the end. Example: cat, hat, grass field, style of [artist name] and [artist name].
There is also a third section for style and references.

"EDIT: For example, this workflow shows the use of the other prompt windows. This probably isn't the completely recommended setup, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. Here I just use: futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, build by Tesla, Tesla factory in the background. I'm not using 'breathtaking', 'professional', 'award winning', etc., because it's already handled by sai-enhance."

On creativity controls: "u/wolowhatever, we set 5 as the default, but it really depends on the image and image style, to be honest. I tend to find that most images work well around a Freedom of 3. Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at around 8."

Finally, you can use () to change the emphasis of a word or phrase, like (good code:1.2) or (bad code:0.8); a toy parser for this syntax follows below.
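To make the weighting syntax concrete, here is a small illustration (not ComfyUI's actual prompt parser) that pulls out the weighted spans; 1.0 is neutral, above 1 up-weights, below 1 down-weights:

```python
import re

# Matches spans like "(good code:1.2)"; nested parentheses are out of scope here.
EMPHASIS = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_emphasis(prompt: str) -> list[tuple[str, float]]:
    """Return (phrase, weight) pairs found in an emphasis-annotated prompt."""
    return [(m.group(1), float(m.group(2))) for m in EMPHASIS.finditer(prompt)]

print(parse_emphasis("(good code:1.2) or (bad code:0.8)"))
# -> [('good code', 1.2), ('bad code', 0.8)]
```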