Easy Diffusion Changelog

What's new in Easy Diffusion 3.0 (Aug 31, 2023)

  • ControlNet - Full support for ControlNet, with native integration of the common ControlNet models. Just select a control image, then choose the ControlNet filter/model and run. No additional configuration or download necessary. Supports custom ControlNets as well.
  • SDXL - Full support for SDXL. No configuration necessary, just put the SDXL model in the models/stable-diffusion folder.
  • Multiple LoRAs - Use multiple LoRAs, including SDXL and SD2-compatible LoRAs. Put them in the models/lora folder.
  • Embeddings - Use textual inversion embeddings easily by putting them in the models/embeddings folder and using their names in the prompt (or by clicking the + Embeddings button to select embeddings visually). Thanks @JeLuF.
  • Seamless Tiling - Generate repeating textures that can be useful for games and other art projects. Works best at 512x512 resolution. One common implementation technique is sketched after this list. Thanks @JeLuF.
  • Inpainting Models - Full support for inpainting models, including custom inpainting models. No configuration (or YAML files) necessary.
  • Faster than v2.5 - Nearly 40% faster than Easy Diffusion v2.5, and can be even faster if you enable xFormers.
  • Even less VRAM usage - Less than 2 GB for 512x512 images on the 'low' VRAM usage setting (SD 1.5). Can generate large images with SDXL.
  • WebP images - Supports saving images in the lossless WebP format.
  • Undo in the UI - Remove tasks or images from the queue easily, and undo the action if you removed anything accidentally. Thanks @JeLuF.
  • Three new samplers, and a latent upscaler - Added DEIS, DDPM and DPM++ 2M SDE as additional samplers, along with a latent upscaler. Thanks @ogmaresca and @rbertus2000.
  • Significantly faster 'Upscale' and 'Fix Faces' buttons on the images.
  • Major rewrite of the code - We've switched to using diffusers under the hood, which allows us to release new features faster, and focus on making the UI and installer even easier to use. A minimal sketch of the diffusers workflow appears after this list.
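
As an illustration of the new diffusers backend, here is a minimal sketch of the equivalent raw-diffusers workflow: loading a Stable Diffusion model, applying a textual inversion embedding, and saving the result as lossless WebP. This is not Easy Diffusion's internal code, and the model ID, embedding file and trigger token are placeholder examples.

    # A minimal diffusers sketch (not Easy Diffusion's internal code).
    # The embedding file and its trigger token are hypothetical examples.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # load a textual inversion embedding, then use its token in the prompt
    pipe.load_textual_inversion("models/embeddings/my-style.pt", token="<my-style>")

    image = pipe(
        "a castle on a hill, <my-style>",
        num_inference_steps=25, width=512, height=512,
    ).images[0]

    # diffusers returns PIL images, so lossless WebP is a one-liner
    image.save("castle.webp", lossless=True)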
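The seamless tiling option can be understood through a common community technique for diffusers-based pipelines: switching every convolution to circular padding, so the generated image wraps around at its edges. A rough sketch of that idea (not necessarily Easy Diffusion's exact implementation), applied to a pipeline like the one above:

    # One common seamless-tiling trick: make every Conv2d in the UNet and
    # VAE use circular padding, so textures wrap at the image edges.
    # Not necessarily Easy Diffusion's exact implementation.
    import torch

    def make_seamless(pipe):
        for model in (pipe.unet, pipe.vae):
            for module in model.modules():
                if isinstance(module, torch.nn.Conv2d):
                    module.padding_mode = "circular"
        return pipe

    # usage, with `pipe` loaded as in the previous sketch:
    # texture = make_seamless(pipe)("stone wall texture").images[0]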

New in Easy Diffusion 2.5.41 (Jul 11, 2023)

  • (beta-only) Fix broken inpainting in low VRAM usage mode.
  • (beta-only) Fix a recent regression where LoRAs would not get applied when changing SD models.
  • Fix a regression where latent upscaler stopped working on PCs without a graphics card.
  • Automatically fix black images if fp32 attention precision is required in diffusers.
  • Another fix for multi-GPU rendering (in all VRAM usage modes).
  • Fix multi-GPU bug with "low" VRAM usage mode while generating images.
  • Fix multi-GPU bug with CodeFormer.
  • Allow changing the strength of CodeFormer, and slightly improved styling of the CodeFormer options.
  • Allow sharing an Easy Diffusion instance via https://try.cloudflare.com/. You can find this option at the bottom of the Settings tab. Thanks @JeLuF.
  • Add an option to download tiled images, shown as a button on the generated image. This creates larger images by tiling the image generated by Easy Diffusion. Thanks @JeLuF.
  • (beta-only) Allow LoRA strengths between -2 and 2. Thanks @ogmaresca.

New in Easy Diffusion 2.5.24 (Mar 13, 2023)

  • Button to load an image mask from a file.
  • Logo change. Image credit: @lazlo_vii.

New in Easy Diffusion 2.5.15 (Mar 12, 2023)

  • Nearly twice as fast - significantly faster image generation. Code contributions are welcome to make our project even faster: https://github.com/easydiffusion/sdkit/#is-it-fast
  • Support for Stable Diffusion 2.1 (including CPU) - loads v1.4, v2.0 and v2.1 models seamlessly. Just place your SD 2.1 models in the models/stable-diffusion folder and refresh the UI page. Works on CPU as well.
  • Memory-optimized Stable Diffusion 2.1 - you can now use Stable Diffusion 2.1 models, with the same low-VRAM optimizations that we've always had for SD 1.4. Please note that the SD 2.0 and 2.1 models require more GPU and system RAM than the SD 1.4 and 1.5 models.
  • 6 new samplers! - explore the new samplers, some of which can generate great images in less than 10 inference steps!
  • Model Merging - You can now merge two models (.ckpt or .safetensors) and output .ckpt or .safetensors models, optionally in fp16 precision. A rough sketch of the underlying technique appears after this list. Details: https://github.com/cmdr2/stable-diffusion-ui/wiki/Model-Merging
  • Intelligent Model Detection - automatically picks the right YAML configuration for known models. For example, we automatically detect and apply "v" parameterization (required for some SD 2.0 models), and "fp32" attention precision (required for some SD 2.1 models).
  • Fast loading/unloading of VAEs - The entire Stable Diffusion model no longer needs to be reloaded each time you change the VAE.
  • Three GPU Memory Usage Settings - High (fastest, maximum VRAM usage), Balanced (default - almost as fast, significantly lower VRAM usage), Low (slowest, very low VRAM usage). The Low setting is applied automatically for GPUs with less than 4 GB of VRAM.
  • Find models in sub-folders - This allows you to organize your models into sub-folders inside models/stable-diffusion, instead of keeping them all in a single folder.
  • Save metadata as JSON - You can now save the metadata files as either text or JSON files (choose in the Settings tab).
  • Color correction for img2img - an option to preserve the color profile (histogram) of the initial image. This is especially useful if you're getting red-tinted images after inpainting/masking.
  • Major rewrite of the code - Most of the codebase has been reorganized and rewritten to make it more manageable and easier for new developers to contribute features. We've separated our core engine into a new project called sdkit, which allows anyone to integrate Stable Diffusion (and related modules like GFPGAN) into their own projects via a simple pip install sdkit: https://github.com/easydiffusion/sdkit/ (a minimal usage sketch appears after this list).
  • Name change - Last, and probably the least, the UI is now called "Easy Diffusion". It indicates the focus of this project - an easy way for people to play with Stable Diffusion.
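
The core of the model merging feature above is a weighted average of the two checkpoints' tensors. A minimal sketch of that idea, assuming two .safetensors checkpoints with matching keys; this is an illustration, not Easy Diffusion's actual merge code (see the wiki link above for the real feature):

    # A minimal linear-merge sketch (not Easy Diffusion's actual merge code).
    # Blends two checkpoints with ratio `alpha`; `half=True` writes fp16.
    from safetensors.torch import load_file, save_file

    def merge_checkpoints(path_a, path_b, out_path, alpha=0.5, half=False):
        a, b = load_file(path_a), load_file(path_b)
        merged = {}
        for key, tensor in a.items():
            if key in b and b[key].shape == tensor.shape and tensor.is_floating_point():
                merged[key] = (1 - alpha) * tensor + alpha * b[key]
            else:
                merged[key] = tensor  # keep A's copy for non-float or unmatched keys
        if half:
            merged = {k: v.half() if v.is_floating_point() else v
                      for k, v in merged.items()}
        save_file(merged, out_path)

    merge_checkpoints("modelA.safetensors", "modelB.safetensors",
                      "merged.safetensors", alpha=0.3, half=True)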
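And since sdkit is a standalone library, here is a minimal usage sketch adapted from its README; the model path below is a placeholder:

    # A minimal sdkit sketch, adapted from the sdkit README.
    # The model path below is a placeholder.
    import sdkit
    from sdkit.models import load_model
    from sdkit.generate import generate_images

    context = sdkit.Context()
    context.model_paths["stable-diffusion"] = "models/stable-diffusion/sd-v1-4.ckpt"
    load_model(context, "stable-diffusion")

    images = generate_images(
        context, prompt="Photograph of an astronaut riding a horse",
        seed=42, width=512, height=512,
    )
    images[0].save("astronaut.png")  # generate_images returns a list of PIL images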