I have mentioned several times that you can reuse prompts that are freely available, without really explaining how you would do it, so let me give an example. Go to https://civitai.com/, a site you likely already visit to get models, and look at the Images section. I selected https://civitai.com/images/7287702 and copied the positive prompt:

Photo of artistic stone cup with 3D carvings, little mause theme with forest background, decorated with amber accents, masterpiece of art, visually stunning, intricate details, sharp focus, 55mm f/ 1.8 lens, depth of field, natural daylight

and the negative prompt:

blurry, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label, (worst quality, low quality, normal quality:2),

I changed the positive prompt to read possum instead of mause (mouse). I used both the sd_xl_base_1.0.safetensors and childrensStories_v1SemiReal.safetensors models and got two different example images. Both are impressive enough, but neither accurately represents a possum. General models are unlikely to include many possum photographs in their training data, and you might have to train your own model if you need one. This simple technique of borrowing prompt information will help newer, and even seasoned, Stable Diffusion users steer an image in the direction they need.
One other time- and space-saving option with ComfyUI is to have it point to the models you have already loaded for Automatic1111, for example (unneeded if you started Stable Diffusion with the ComfyUI version). In your ComfyUI folder you should find a file named extra_model_paths.yaml.example. Remove the .example text from the file name, making it a yaml file, and edit it to reflect your actual Automatic1111 folder; ComfyUI will then also look in that folder for models. The base_path: path/to/stable-diffusion-webui/ line is what needs to be changed appropriately, and the new file saved, for this change to take effect.

But getting back to the purpose of this article: we want to start using ComfyUI, and we might first want to add another model or two. You can do this by opening the ComfyUI Manager menu, which we added previously. After selecting the Install Models button, a list of over 300 options becomes available, as shown. You can use the filter to show all, installed, or not-installed models. In my experience, you may need to wait after selecting a model, then restart the system and look again to make sure it is installed.
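As a sketch, the relevant part of the renamed extra_model_paths.yaml might look like the following. The exact keys come from the shipped .example file and can vary with your ComfyUI version, and the base_path shown here is only a placeholder that you must replace with your own Automatic1111 folder:

```yaml
# extra_model_paths.yaml (sketch; keys follow the shipped .example file)
a111:
    base_path: D:/AI/stable-diffusion-webui/   # <-- change to your Automatic1111 folder

    checkpoints: models/Stable-diffusion       # subfolders are relative to base_path
    vae: models/VAE
    loras: models/Lora
    upscale_models: models/ESRGAN
    embeddings: embeddings
```

After saving the file, restart ComfyUI so it rescans the model paths.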
Many up-scaling options also exist as optional models. When installed, they may be found in the models/upscale_models area of the ComfyUI models folders, depending on file type (pth or safetensors), as noted in the type column. You may want to install one or more of those available; you can use the search option to find upscale-related models.

Next we will learn how to build our own workflow. We could start from scratch by clearing the current workflow with the Clear button (found alongside Queue Prompt), however you should always consider whether you may want to reuse the present workflow. If so, you can select the Save button and give the workflow an appropriate name; it is saved as a json file. In our case we will start with the default workflow, or whichever one you have that works, with the ultimate goal of making something that looks like the provided workflow. Initially (shown left) it looks much more complex than what we have done previously, but it is essentially two workflows connected together. To get the feel of it, right-click somewhere on the workflow canvas and select Add Node, then Image and Upscale, for example. You will find over a hundred node options, a bit mind-boggling at first. If you started with a blank page you could add each node found in your default workflow one by one, but it is easier to begin with something and simply modify it. Each node can be moved around by selecting it and dragging with your mouse. You can also copy a node (Ctrl+C) and paste it (Ctrl+V), or, more importantly, paste with Ctrl+Shift+V, which not only pastes a copy of the node but keeps all the same options and connections. (Make sure you don't move the mouse as you paste, or you might get a hundred copies pasted.)
You will want to duplicate the KSampler node in that manner. Be careful to move each node so you can see its inputs and outputs; sometimes holding the Ctrl key down and using the scroll wheel on your mouse to adjust the zoom makes this placement easier. The original Latent image output from the KSampler node went to the VAE Decode node, then on to the Preview Image or Save Image node. (The lower-right corner of image nodes can be pulled down to make space for the image.) The VAE Decode step converts, or decodes, the latent image, which is not a viewable image, into a standard image. Likewise, the Latent image output from the second KSampler node does the same thing, except that in between it passes through an Upscale Image node. What is happening is that the initial prompt information goes to two KSamplers, one processing at 512 x 512 and the other at 2048 x 2048. The problem is that while the larger image is indeed larger, it contains no more information than the smaller one. But this ComfyUI process is the same regardless of purpose. Next time we will produce a larger image that holds more information, which gives more detail.
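The point that simple upscaling adds pixels but no information can be illustrated with a tiny NumPy sketch. This is only an analogy, not what the Upscale Image node literally does: the 4 x 4 array and 4x scale factor stand in for a 512 x 512 image being blown up to 2048 x 2048 by nearest-neighbour duplication.

```python
import numpy as np

# A tiny "image": a 4x4 array standing in for a 512x512 image.
small = np.arange(16).reshape(4, 4)

# Nearest-neighbour upscale by 4x: each pixel becomes a 4x4 block
# of identical values (np.kron repeats every element as a block).
big = np.kron(small, np.ones((4, 4), dtype=small.dtype))

print(big.shape)            # (16, 16) -- 16x the pixel count
print(len(np.unique(big)))  # 16 -- no new distinct values were created
```

The upscaled array is sixteen times larger, yet it still contains only the sixteen distinct values of the original; extra detail has to come from somewhere else, such as re-sampling at the higher resolution.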