InvokeAI Changelog

What's new in InvokeAI 4.2.1

May 13, 2024
  • Update INSTALL_REQUIREMENTS.md - 'linux only' under AMD for SDXL. by @gogurtenjoyer in #6329
  • fix: fix seamless by @blessedcoolant in #6344
  • fix(ui): CA processor cancellation by @psychedelicious in #6336
  • feat(ui): protect against t2i adapters with incompatible image dimensions by @psychedelicious in #6342
  • fix(ui): jank in depthanything model size select by @psychedelicious in #6335
  • fix(ui): use translations for canvas layer select by @psychedelicious in #6357
  • fix(ui): disable listening on CA and II layers by @psychedelicious in #6332
  • fix(api): model cover image lost by @psychedelicious in #6337
  • fix(ui): invoke button shows loading while queueing by @psychedelicious in #6359
  • ui: translations update from weblate by @weblate in #6245
  • feat(backend): fix nsfw checker catch-22 by @psychedelicious in #6360
  • chore: v4.2.1 by @psychedelicious in #6362

New in InvokeAI 4.2.0 (May 11, 2024)

  • Enhancements:
  • Control Layers
  • Add TCD scheduler @l0stl0rd
  • Image Viewer updates -- You can easily switch to the Image Viewer on the Generations tab by pressing the Z hotkey or double-clicking any image in the gallery.
  • Major Changes:
  • Also known as the "who moved my ?" section, this list details where certain features have moved.
  • Image to Image: The Image to Image pipeline can be executed using Control Layers by adding an Initial Image layer.
  • Control Adapters and IP Adapters: These have been moved to the Control Layers tab -- with the added benefit of being able to visualize your control adapter's processed images easily!
  • Fixes:
  • Fixed inpainting models on canvas @dunkeroni
  • Fixed IP Adapter starter models
  • Fixed bug where temp files (tensors, conditioning) aren't cleaned up properly
  • Fixed trigger phrase form submit @joshistoast
  • Fixed SDXL checkpoint inpainting models not installing
  • Fixed installing models on external SSDs on macOS
  • Fixed Control Adapter processors' image size constraints being overly restrictive

New in InvokeAI 4.2.0 Beta 2 (May 7, 2024)

  • Changes since v4.2.0b1:
  • Control Layer masks are cached, reducing time spent outside denoising.
  • Fixed viewer getting stuck when spamming the toggle hotkey
  • Fixed viewer show/hide logic
  • Viewer button more obviously a button
  • Do not run HRO when using an initial image
  • Fixed next/prev buttons getting stuck
  • Fixed upscaling while on canvas tab saves to gallery
  • Snap to canvas bounds with rect tool
  • Perf enhancements in control layers canvas
  • Settings/Control Layers tabs look like tabs
  • Close viewer when adding RG layer
  • Fix auto-switch to viewer on new image
  • Control Layers tab now shows total layer count, not just "valid" layer count
  • Internal: bump all UI deps

New in InvokeAI 4.2.0 Beta 1 (May 3, 2024)

  • Initial image support in Control Layers, no more dedicated tab
  • Tabs renamed to Generation, Canvas, Workflows, Models and Queue
  • Refactored internal handling of control layers, which fixes all reported UI errors
  • T2I Adapter support in Control Layers
  • FF v125 bug fixed
  • Add TCD scheduler @l0stl0rd
  • Image viewer updates
  • Fixed Control Adapter processors' image size constraints
  • Metadata recall for Control Layers
  • Many small and not-particularly-memorable bugfixes

New in InvokeAI 4.2.0 Alpha 4 (Apr 30, 2024)

  • "Regional Control" -> "Control Layers"
  • Control Adapters supported in Control Layers
  • Updates to Control Layers UI
  • Fixed SDXL Checkpoint Inpainting models
  • Fixed installing models on external SSDs on macOS

New in InvokeAI 4.2.0 Alpha 3 (Apr 25, 2024)

  • Updates to the Regional Control UI
  • Fixed inpainting models on canvas
  • IP Adapter starter models fixed
  • Fixed bug where temp files (tensors, conditioning) aren't cleaned up properly
  • Fixed trigger phrase form submit @joshistoast

New in InvokeAI 4.2.0 Alpha 2 (Apr 23, 2024)

  • Fix: IP Adapter Method having incorrect informational popover
  • Re-enable app shutdown actions
  • Feat(ui): regional prompting
  • Feat(ui): regional prompting followups
  • Feat(ui): regional prompting followups 2
  • Feat(ui): regional prompting followups 3
  • Chore: v4.2.0a1
  • Feat(ui): regional prompting followups 4
  • Fix(ui): disabled ip adapters applied to regional control

New in InvokeAI 4.1.0 (Apr 18, 2024)

  • Enhancements:
  • Backend and nodes implementation for regional prompting and regional IP Adapter (UI in v4.2.0)
  • Secret option in Workflow Editor to convert a graph into a workflow. See #6181 for how to use it.
  • Assortment of UI papercuts
  • Favicon & page title indicate generation status @jungleBadger
  • Delete hotkey and button work with gallery selection @jungleBadger
  • Workflow editor perf improvements
  • Edge labels in workflow editor
  • Updated translations @Harvester62, @symant233, @Vasyanator
  • Updated docs @sarashinai
  • Improved torch device and precision handling
  • Fixes:
  • multipleOf for invocations (for example, the Noise invocation's width and height have a step of 8)
  • Poor quality "fried" refiner outputs
  • Poor quality inpainting with gradient denoising and refiner
  • Canvas images appearing in the wrong places
  • The little eye defaulting to off in canvas staging toolbar
  • Premature OOM on windows (see shared GPU memory FAQ)
  • ~1s delay between queue items
  • Wonky model manager forms navigating away from UI @clsn
  • Invocation API:
  • New method to get the filesystem path of an image: context.images.get_path(image_name: str, thumbnail: bool) @fieldOfView
  • Internal:
  • Improved knip config @webpro
  • Updated python deps @Malrama
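The new Invocation API method above exposes the documented signature context.images.get_path(image_name: str, thumbnail: bool). The sketch below illustrates those semantics only; the ImagesService class, directory layout, and thumbnail extension are assumptions for illustration, not InvokeAI's actual storage code.

```python
from pathlib import Path

class ImagesService:
    """Illustrative stand-in for the service behind context.images.
    The folder names and .webp thumbnail suffix are assumptions."""

    def __init__(self, root: Path):
        self.root = root

    def get_path(self, image_name: str, thumbnail: bool = False) -> Path:
        # Resolve either the full image or its thumbnail variant.
        folder = "thumbnails" if thumbnail else "images"
        name = Path(image_name)
        if thumbnail:
            name = name.with_suffix(".webp")  # assumed thumbnail format
        return self.root / folder / name

svc = ImagesService(Path("/data/invokeai/outputs"))
full = svc.get_path("abc123.png")
thumb = svc.get_path("abc123.png", thumbnail=True)
```

In a real custom node, the same call shape lets you hand the image's filesystem path to external tools instead of loading pixel data through the API.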

New in InvokeAI 4.0.2 (Apr 4, 2024)

  • fix(ui): add default coherence mode to generation slice migration by @psychedelicious in #6109
  • docs: fix broken link by @psychedelicious in #6116
  • fix(ui): cancel batch status button greyed out by @psychedelicious in #6121
  • IP-Adapter Safetensor Support by @blessedcoolant in #6041
  • feat(mm): include needed vs free in OOM by @psychedelicious in #6111
  • add some test IDs for accordion targeting by @maryhipp in #6126
  • fix(mm): do not rename model file when model record is renamed by @psychedelicious in #6113
  • Update probe to always use cpu for loading models by @brandonrising in #6128
  • feat(mm): restore missing models by @psychedelicious in #6118
  • fix(ui): fix model name overflow by @psychedelicious in #6114
  • feat(installer): remove extra GPU options by @psychedelicious in #6119
  • fix(config): fix find_root to use venv parent if no CLI arg or env var by @psychedelicious in #6115
  • docs: update 020_INSTALL_MANUAL.md, remove conda section by @psychedelicious in #6131
  • fix: unicode errors during install or app startup by @psychedelicious in #6133
  • chore: v4.0.2 by @psychedelicious in #6134

New in InvokeAI 4.0.1 (Apr 2, 2024)

  • 4.0.1 Fixes:
  • Minor updates that resolve performance issues on the canvas.
  • Some installation/updating fixes to improve experience.

New in InvokeAI 4.0.0 (Apr 2, 2024)

  • What's Changed:
  • fix(ui): do not provide auth headers for openapi.json by @maryhipp in #5726
  • ui: translations update from weblate by @weblate in #5736
  • ui: translations update from weblate by @weblate in #5743
  • add latent-upscale to communityNodes.md by @gogurtenjoyer in #5728
  • updated tooltip popovers by @chainchompa in #5751
  • ui: translations update from weblate by @weblate in #5752
  • ui: translations update from weblate by @weblate in #5765
  • ui: translations update from weblate by @weblate in #5788
  • Update communityNodes.md by @skunkworxdark in #5802
  • ui: translations update from weblate by @weblate in #5823
  • chore: merge next by @psychedelicious in #5838
  • feat: automated releases via github action by @psychedelicious in #5839
  • Fix problem of all installed models being assigned "" by @lstein in #5841
  • Tidy the attention code (in preparation for regional prompting) by @RyanJDick in #5843
  • ci: fix workflows by @psychedelicious in #5854
  • Remove attention map saving by @RyanJDick in #5845
  • Make model key assignment deterministic by @lstein in #5792
  • fix(canvas): use a corrected gradient mask for canvas pasteback by @dunkeroni in #5855
  • Update Transformers 4.37.2 -> 4.38.2 by @Malrama in #5859
  • consolidate tabs for main model and concepts in generation panel by @maryhipp in #5848
  • ui: translations update from weblate by @weblate in #5832
  • Log a stack trace for invocation errors by @RyanJDick in #5853
  • Allow in place local installs of models by @brandonrising in #5852
  • Default model settings by @maryhipp in #5850
  • refactor(mm): update configs and schemas by @psychedelicious in #5846
  • updates for defaultModel by @maryhipp in #5866
  • Remove references to the no longer existing invokeai.app.services.mod… by @brandonrising in #5871
  • refactor: ✏️ canvas mask compositor naming by @joshistoast in #5873
  • fix(nodes): invocation cache clearing by @psychedelicious in #5880
  • fix(ui): fix URL for get image workflow by @maryhipp in #5882
  • feat(ui): UI papercuts by @maryhipp in #5881
  • fix(ui): only show default settings on main models by @maryhipp in #5884
  • feat(scripts): typegen improvements by @psychedelicious in #5878
  • feat(ui): add config_path to model update form for ckpt models by @maryhipp in #5883
  • fix(nodes): load config before doing anything else by @psychedelicious in #5877
  • invert canvas brush size hotkey setting by @joshistoast in #5875
  • refactor: model identifiers improvements by @psychedelicious in #5879
  • ui: translations update from weblate by @weblate in #5...

New in InvokeAI 4.0.0 RC 5 (Mar 22, 2024)

  • RC5 has improved the default hashing experience, updated default ControlNet Processor quality for SDXL outputs, and addressed other minor bugs/issues found in RC testing.
  • A new node has also been added for masking by ID.

New in InvokeAI 4.0.0 RC 4 (Mar 20, 2024)

  • Fix(ui): do not provide auth headers for openapi.json
  • Ui: translations update from weblate
  • Ui: translations update from weblate
  • Add latent-upscale to communityNodes.md
  • Updated tooltip popovers
  • Ui: translations update from weblate
  • Ui: translations update from weblate
  • Ui: translations update from weblate
  • Update communityNodes.md
  • Ui: translations update from weblate
  • Chore: merge next
  • Feat: automated releases via github action
  • Fix problem of all installed models being assigned ""
  • Tidy the attention code (in preparation for regional prompting)
  • Ci: fix workflows
  • Remove attention map saving
  • Make model key assignment deterministic
  • Fix(canvas): use a corrected gradient mask for canvas pasteback
  • Update Transformers 4.37.2 -> 4.38.2
  • Consolidate tabs for main model and concepts in generation pane
  • Ui: translations update from weblate
  • Log a stack trace for invocation errors
  • Allow in place local installs of models
  • Default model settings
  • Refactor(mm): update configs and schemas
  • Updates for defaultModel
  • Remove references to the no longer existing invokeai.app.services.mod…
  • Refactor: ✏️ canvas mask compositor naming
  • Fix(nodes): invocation cache clearing
  • Fix(ui): fix URL for get image workflow
  • Feat(ui): UI papercuts
  • Fix(ui): only show default settings on main models
  • Feat(scripts): typegen improvements
  • Feat(ui): add config_path to model update form for ckpt models
  • Fix(nodes): load config before doing anything else
  • Invert canvas brush size hotkey setting
  • Refactor: model identifiers improvements
  • Ui: translations update from weblate
  • Add cover images to model manager
  • Discard current inpaint instance
  • Feat(ui): model manager UI pass
  • Remove civit install source
  • Feat(ui): allow inplace installs
  • Fix: workflows backcompat
  • Feat: default processors for controlnet & t2i adapter
  • Migrate models on service start
  • Remove old data migration ...

New in InvokeAI 4.0.0 RC 1 (Mar 12, 2024)

  • What's New:
  • New Model Manager:
  • The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:
  • Queued model downloads
  • Per-model preview images
  • Per-model default settings - choose a model’s default VAE, Scheduler, CFG Scale, etc.
  • User-defined trigger phrases for concepts/LoRAs and models - access by typing the < key in any prompt box
  • API key support for model marketplaces
  • Model Hashing:
  • When you first run v4.0.0, it will take a while to start up as it does a one-time hash of all of your model files.
  • Do not panic.
  • Hashes provide a stable identifier for a model that is the same across every platform.
  • If you don’t care about this, you can disable the hashing using the skip_model_hash setting in invokeai.yaml.
  • Canvas Improvements:
  • The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple “passes”, greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.
  • The compositing settings on canvas allow for control over the gradient denoising process.
  • Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.
  • Bonus: Invoke Training (Beta):
  • As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:
  • Textual Inversion Training
  • LoRA Training
  • Dreambooth Training
  • Pivotal Tuning Training
  • You can learn more about Invoke Training at https://github.com/invoke-ai/invoke-training
  • Minor UI/UX Enhancements:
  • Canvas Brush Size Scroll can now be inverted (Thanks @joshistoast!)
  • Images in the Canvas Staging Area can now be discarded individually (Thanks @joshistoast!)
  • Many small bug fixes and resolved papercuts
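The model-hashing notes above say a hash gives a stable identifier that is the same across every platform (and that it can be disabled via the skip_model_hash setting in invokeai.yaml). A minimal sketch of content-based file hashing, using SHA-256 for illustration; InvokeAI's actual algorithm and chunking may differ.

```python
import hashlib
from pathlib import Path

def hash_model_file(path: Path, chunk_size: int = 2 ** 20) -> str:
    """Content hash of a model file, read in chunks so multi-GB
    checkpoints never need to fit in memory. SHA-256 is used here
    as an example; the algorithm InvokeAI uses may differ."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Because the digest depends only on the file's bytes, the same model file yields the same identifier on every OS and filesystem, which is also why the first startup of v4.0.0 pauses to hash the model library once.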

New in InvokeAI 3.7.0 (Feb 15, 2024)

  • Workflow Editor Improvements:
  • Workflow Linear View - Workflows are now able to be used in a sleek Linear View interface that hides the workflow and focuses on the image being generated! To enable this, from a workflow, click the "Use in Linear View" button next to the model name in the left sidebar.
  • Workflow Linear View inputs are now able to be re-ordered by dragging and dropping.
  • Other Changes:
  • DWPose is now the default OpenPose processor in Invoke - see Things to Know
  • Improved Seamless Tiling! Now even more seamless
  • Update diffusers version to 0.26.3
  • Various bug fixes

New in InvokeAI 3.6.3 RC 1 (Feb 8, 2024)

  • Significantly improved generation speeds
  • Workflow Library improvements
  • New Unified Canvas Hotkeys - Ctrl+Mouse Scroll can now change the brush size!
  • Installer & Updater improvements
  • Model Manager updates to model conversion and saving
  • Faster image saving

New in InvokeAI 3.6.2 (Jan 25, 2024)

  • UI/UX Overhaul Improvements
  • Based on community feedback, updates have been made to the new UI/UX. See "Things to Know" below when upgrading from Invoke 3.4
  • Depth-Anything is now supported and is the default depth processor in Invoke
  • Remix image - similar to Use All, but allows you to create a new image by setting all parameters except the Seed
  • "About" menu can be found in settings. Displays Invoke & dependency versions
  • Ideal Size node is now a default node
  • Fixed LoRA renaming bug
  • Updated Workflow saving behavior

New in InvokeAI 3.6.1 (Jan 24, 2024)

  • UI/UX Overhaul Improvements: Based on community feedback, updates have been made to the new UI/UX. See "Things to Know" below when upgrading from Invoke 3.4
  • Depth-Anything is now supported and is the default depth processor in Invoke
  • Remix image - similar to Use All, but allows you to create a new image by setting all parameters except the Seed
  • "About" menu can be found in settings. Displays Invoke & dependency versions
  • Ideal Size node is now a default node
  • Fixed LoRA renaming bug

New in InvokeAI 3.6.0 (Jan 12, 2024)

  • UI/UX Overhaul:
  • We're overhauling our brand as we continue to grow into serving businesses and enterprises, helping them solve their generative AI deployment challenges. At our core, we're the same - same mission, same team, same commitment to OSS.

New in InvokeAI 3.6.0 RC 5 (Jan 8, 2024)

  • UI/UX Overhaul:
  • The InvokeAI application has undergone a major design overhaul, focusing on making the application easier and more efficient to use.
  • Performance improvements by implementing bfloat16 instead of float16 for compatible systems.

New in InvokeAI 3.6.0 RC 2 (Jan 3, 2024)

  • ui: redesign followups 2 by @psychedelicious in #5374
  • Sisco/docker allow relative paths for invokeai data by @dsisco11 in #5344
  • define tooltip color, optional new logo by @maryhipp in #5375
  • Updater suggest db backup when installing RC by @Millu in #5381
  • ui: redesign followups 3 by @psychedelicious in #5385
  • Release: v3.6.0rc2 by @Millu in #5386

New in InvokeAI 3.6.0 RC 1 (Jan 2, 2024)

  • UI/UX Overhaul:
  • The InvokeAI application has undergone a major design overhaul, focusing on making the application easier and more efficient to use.

New in InvokeAI 3.5.1 (Dec 29, 2023)

  • Fixed bug with multiple embeddings
  • Added Tiled Upscaling to Default Workflows (Beta)
  • Respect use of torch-sdp from config.yaml

New in InvokeAI 3.5.0 (Dec 28, 2023)

  • Workflow Library:
  • Until now, a workflow could only be associated with an image, or be downloaded as JSON.
  • The Workflow Library allows workflows to be saved independently to the database. The UI provides sorting and filtering options to manage them.
  • With the Workflow Library, we can now ship default workflows directly in the app. You’ll see a couple on the Default tab. As the InvokeAI application evolves we will keep these workflows up-to-date, and regularly add more.
  • Other Enhancements:
  • More capable node updating
  • Better errors when your workflow doesn’t match your installed nodes
  • Community node packs auto-report their name, so if your workflow needs nodes you don’t have installed, you’ll see what’s missing
  • Custom field types for nodes
  • Tiled upscaling nodes (BETA)
  • Added many missing translation strings
  • Gallery auto-scroll
  • Developer Changes:
  • There are a number of important changes for contributors in this release.
  • Frontend/UI
  • The biggest change is that the frontend build is no longer included in main. If you run the app off a clone of the repo, you’ll need to build the frontend to use the UI. See the “Impact to Contributors” section on this PR #5253.
  • Other changes:
  • Moved from yarn to pnpm for package management
  • Updated many packages
  • Refactored all workflow schemas and types
  • Workflow migration logic implemented
  • Changes to release process
  • Backend Changes:
  • This release includes feature-flagged changes to the model manager and a new database migration utility.
  • Model Manager:
  • The Model Manager is partway through a redesign to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.

New in InvokeAI 3.5.0 RC 3 (Dec 19, 2023)

  • More capable node updating
  • Better errors when your workflow doesn’t match your installed nodes
  • Community node packs auto-report their name, so if your workflow needs nodes you don’t have installed, you’ll see what’s missing
  • Tiled upscaling nodes (BETA)
  • Added many missing translation strings
  • Gallery auto-scroll

New in InvokeAI 3.4.0 Post 2 / 3.5.0 RC 1 (Dec 15, 2023)

  • What's New in 3.5.0:
  • Workflow Library:
  • Until now, a workflow could only be associated with an image, or be downloaded as JSON.
  • The Workflow Library allows workflows to be saved independently to the database. The UI provides sorting and filtering options to manage them.
  • With the Workflow Library, we can now ship default workflows directly in the app. You’ll see a couple on the Default tab. As the InvokeAI application evolves we will keep these workflows up-to-date, and regularly add more.
  • Custom Field Types in Nodes
  • Previously, node authors had to use built-in field types for inputs and outputs of their nodes. While this covered many use-cases, we recognized the need for “custom” field types.
  • This is now fully supported, and any pydantic model can be used as a field type.
  • Other Enhancements:
  • More capable node updating
  • Better errors when your workflow doesn’t match your installed nodes
  • Community node packs auto-report their name, so if your workflow needs nodes you don’t have installed, you’ll see what’s missing
  • Tiled upscaling nodes (BETA)
  • Added many missing translation strings
  • Gallery auto-scroll

New in InvokeAI 3.4.0 Post 2 (Dec 10, 2023)

  • Fixed LoRAs being applied twice (3.4.0post2)

New in InvokeAI 3.4.0 Post 1 (Nov 18, 2023)

  • This post release fixes a bug that prevented Image to Image generations from running successfully.

New in InvokeAI 3.4.0 (Nov 17, 2023)

  • LCM & LCM-LoRA are now supported in InvokeAI. See the note in Things to Know below.
  • Community Nodes can now be installed by adding them to the nodes folder of the InvokeAI installation
  • Core Nodes can be automatically updated via the Workflow Editor
  • Large performance improvements: reduced LoRA & text encoder loading times, improved token handling
  • HiRes Fix has returned!
  • FreeU is supported for workflows
  • ControlNets & T2I-Adapters can now be used together
  • Multi-Image IP-Adapter is now available in Nodes Workflows (Instant LoRA!)
  • Intermediate images are no longer saved to disk
  • ControlNets in .safetensors format are now able to be used (SD1.5 & SD2 only). See the note in Things to Know below.
  • VAE is now able to be recalled with "Use All"
  • Color Picker Improvements
  • Expanded translations (Dutch, Italian and Chinese are almost entirely complete!)
  • InvokeAI now uses Pydantic2 and the latest FastAPI, making certain functions (like Iterate nodes) much more efficient.

New in InvokeAI 3.4.0 RC 3 (Nov 7, 2023)

  • InvokeAI now uses Pydantic2 and the latest FastAPI, making certain functions (like Iterate nodes) much more efficient.

New in InvokeAI 3.4.0 RC 1 (Oct 20, 2023)

  • Community Nodes can now be installed by adding them to the nodes folder of the InvokeAI installation
  • HiRes Fix has returned for SD1.5 generations!
  • InvokeAI now uses Pydantic2 and the latest FastAPI
  • Multi-Image IP-Adapter is now available (Instant LoRA!)
  • Expanded translations (Dutch and Chinese are almost entirely complete!)
  • Color Picker Improvements
  • Performance enhancements & bug fixes

New in InvokeAI 3.3.0 Post 3 (Oct 16, 2023)

  • T2I-Adapter is now supported
  • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Expanded translations (for many languages!)
  • Unified Canvas improvements and bug fixes

New in InvokeAI 3.3.0 Post 2 (Oct 15, 2023)

  • 3.3.0post2 is a minor hotfix that corrects incompatibility issues when installing xformers, updates translations, and resolves incompatibilities with systems running glibc versions older than 2.3.3.

New in InvokeAI 3.3.0 Post 1 (Oct 13, 2023)

  • T2I-Adapter is now supported
  • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Expanded translations (for many languages!)
  • Unified Canvas improvements and bug fixes

New in InvokeAI 3.3.0 Pre-release (Oct 13, 2023)

  • T2I-Adapter is now supported
  • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Expanded translations (for many languages!)
  • Unified Canvas improvements and bug fixes

New in InvokeAI 3.3.0 RC 1 (Oct 11, 2023)

  • T2I-Adapter is now supported
  • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Unified Canvas improvements and bug fixes

New in InvokeAI 3.2.0 (Oct 3, 2023)

  • What's New in 3.2.0:
  • Queueing:
  • This is a powerful new feature that allows you to queue multiple image generations, create batches, manage the queue, and gain insight into generations.
  • IP-Adapter is now supported:
  • Instructions on getting started with IP-Adapter are located in the "Things to Know" section below
  • TAESD is now supported. You can download TAESD or TAESDXL through the model manager UI
  • LoRAs and ControlNets are now able to be recalled with the "Use All" function
  • New nodes! Load prompts from a file, string manipulation, and expanded math functions
  • Node caching - improves performance by reusing previously cached generation values
  • V-prediction for SD1.5 is now supported
  • Importing images from previous versions of InvokeAI has been fixed
  • Database maintenance script can be run with invokeai-db-maintenance
  • View image metadata with the invokeai-metadata command
  • Workflow Editor UI/UX improvements
  • Unified Canvas improvements & bug fixes
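The node-caching item above means a node's output is reused when it runs again with identical inputs. A minimal, hypothetical sketch of that idea, keyed on node type plus a stable serialization of the inputs; InvokeAI's real cache keys and storage differ.

```python
import json

class InvocationCache:
    """Toy output cache keyed on node type + inputs.
    This is an illustration of the concept, not InvokeAI's code."""

    def __init__(self):
        self._store: dict[str, object] = {}

    def key(self, node_type: str, inputs: dict) -> str:
        # JSON with sorted keys yields the same key for the same inputs.
        return node_type + ":" + json.dumps(inputs, sort_keys=True)

    def get_or_run(self, node_type: str, inputs: dict, run):
        k = self.key(node_type, inputs)
        if k not in self._store:
            self._store[k] = run(**inputs)  # only executed on a cache miss
        return self._store[k]
```

With this shape, repeating a queue item whose upstream nodes are unchanged skips their execution entirely and pays only the dictionary lookup.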

New in InvokeAI 3.2.0 RC 1 (Sep 21, 2023)

  • TAESD is now supported. You can download TAESD or TAESDXL through the model manager UI as you would any other model from HuggingFace.
  • Image Metadata is now preserved with Workflows
  • LoRAs are now able to be recalled with the "Use All" function
  • New nodes! Load prompts from a file, String manipulation, and expanded math functions
  • Importing images from previous versions of InvokeAI has been fixed
  • Database maintenance script can be run with invokeai-db-maintenance
  • View image metadata with the invokeai-metadata command
  • Queueing:
  • This is a powerful new feature that allows you to queue multiple image generations, create batches, manage the queue, and have insight into generations.
  • IP-Adapter is now supported:
  • To get started with IP-Adapter, download the model manually or through the model manager and select it under the "Control Adapter" settings. Once you have provided an image, it will use the image to help prompt the model during image generation.

New in InvokeAI 3.1.1 (Sep 13, 2023)

  • What's New in 3.1.1:
  • Node versioning
  • Nodes now support polymorphic inputs (inputs that accept either a single value of a given type or a list of that type, e.g. Union[str, list[str]])
  • SDXL Inpainting Model is now supported
  • Inpainting & Outpainting Improvements
  • Workflow Editor UI Improvements
  • Model Manager Improvements
  • Fixed configuration script trying to set VRAM on macOS
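A node with a polymorphic input like Union[str, list[str]] typically normalizes the value before use. The helper below is a hypothetical illustration of that pattern, not InvokeAI's code.

```python
from typing import Union

def as_list(value: Union[str, list[str]]) -> list[str]:
    """Normalize a polymorphic input: wrap a lone string in a
    single-element list, pass a list through unchanged."""
    if isinstance(value, str):
        return [value]
    return list(value)
```

Downstream code can then iterate over the result without caring which form the caller supplied.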

New in InvokeAI 3.1.1 RC 1 (Sep 8, 2023)

  • Node versioning
  • Nodes now support polymorphic inputs (inputs that accept either a single value of a given type or a list of that type, e.g. Union[str, list[str]])
  • SDXL Inpainting Model is now supported
  • Inpainting & Outpainting Improvements
  • Workflow Editor UI Improvements
  • Model Manager Improvements
  • Fixed configuration script trying to set VRAM on macOS

New in InvokeAI 3.1.0 RC 1 (Aug 31, 2023)

  • Workflows:
  • InvokeAI 3.1.0 introduces a powerful new tool to aid the image generation process: the Workflow Builder. Workflows combine the power of node-based software with the ease of use of a GUI to deliver the best of both worlds.
  • The Node Editor allows you to build the custom image generation workflows you need, as well as enables you to create and use custom nodes, making InvokeAI a fully extensible platform.
  • To get started with nodes in InvokeAI, take a look at our example workflows, or some of the custom Community Nodes.
  • A zip file of example workflows can be found at the bottom of this page under Assets.
  • Other New Features:
  • Expanded SDXL support across all areas of InvokeAI.
  • Enhanced In-painting & Out-painting capabilities.
  • Improved Control Asset Usage, including from the Unified Canvas.
  • Newly added nodes for better functionality.
  • Seamless Tiling is back, with SDXL support!
  • Improved In-painting & Out-painting
  • Generation statistics can be viewed from the command line after generation
  • Hot-reloading is now available for python files in the application
  • LoRAs are sorted alphabetically
  • Symbolic links to directories in the autoimport folder are now supported
  • UI/UX Improvements
  • Interactively configure image generation options, the attention system, and the VRAM cache
  • ...and so much more! You can view the full change log here

New in InvokeAI 3.0.2 Post 1 (Aug 13, 2023)

  • Support for LoRA models in diffusers format
  • Warn instead of crashing when a corrupted model is detected
  • Bug fix for auto-adding to a board

New in InvokeAI 3.0.2 (Aug 11, 2023)

  • LoRA support for SDXL is now available
  • Multi-select actions are now supported in the Gallery
  • Images are automatically sent to the board that is selected at invocation
  • Images from previous versions of InvokeAI can be imported with the invokeai-import-images command
  • Inpainting models imported from A1111 will now work with InvokeAI (see upgrading note)
  • Model merging functionality has been fixed
  • Improved Model Manager UI/UX
  • InvokeAI 3.0 can be served via HTTPS
  • Execution statistics are visible in the terminal after each invocation
  • ONNX models are now supported for use with Text2Image
  • Pydantic errors when upgrading in place have been resolved
  • Code formatting is now part of the CI/CD pipeline

New in InvokeAI 3.0.2 Pre-release (Aug 9, 2023)

  • LoRA support for SDXL is now available in the UI
  • Multi-select actions are now supported in the Gallery
  • Images are automatically sent to the board that is selected at invocation
  • Inpainting models imported from A1111 will now work with InvokeAI
  • Model merging functionality has been fixed
  • Improved Model Manager UI/UX
  • InvokeAI can be served via HTTPS
  • Execution statistics are visible in the terminal after each invocation
  • ONNX is now supported for use with InvokeAI
  • Pydantic errors when upgrading in place have been resolved
  • Code formatting is now part of the CI/CD pipeline

New in InvokeAI 3.0.1 Hotfix 3 (Jul 30, 2023)

  • This release contains a proposed hotfix for the Windows install OSError crashes that began appearing in 3.0.1. In addition, the following bugs have been addressed:
  • Corrected an issue where some SD-1 safetensors models could not be loaded or converted
  • The models_dir configuration variable used to customize the location of the models directory is now working properly
  • Fixed crashes of the text-based installer when the number of installed LoRAs and other models exceeded 72
  • SDXL metadata is now set and retrieved properly
  • Corrected post1's crash when running configure with the --yes flag
  • Corrected crashes in the CLI model installer

New in InvokeAI 3.0.1 (Jul 29, 2023)

  • What's New in v3.0.1:
  • Stable Diffusion XL support in Text2Image and Image2Image (but not yet the Unified Canvas).
  • Can install and run both diffusers-style and .safetensors-style SDXL models.
  • Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
  • Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
  • The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
  • During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several "starter" main models.
  • User interface cleanup to reduce visual clutter and increase usability.
  • Recent Changes:
  • Since RC3, the following has changed:
  • Fixed crash on Macintosh M1 machines when rendering SDXL images
  • Fixed black images when generating on Macintoshes using the Unipc scheduler (falls back to CPU; slow)

New in InvokeAI 3.0.1 RC 3 (Jul 27, 2023)

  • Added compatibility with Python 3.11
  • Updated diffusers to 0.19.0
  • Cleaned up console logging - can now change logging level as described in the docs
  • Prevent crashes on edge conditions caused by configuring the optional, off-by-default NSFW checker and watermarking features
  • Added download of an updated SDXL VAE "sdxl-vae-fix" that may correct certain image artifacts in SDXL-1.0 models
  • Prevent web crashes during certain resize operations

New in InvokeAI 3.0.1 RC 2 (Jul 27, 2023)

  • Several bugs discovered in RC1 have now been corrected:
  • Stable Diffusion-1 and Stable Diffusion-2 all-in-one .safetensors and .ckpt models did not load or convert - FIXED
  • The sd_xl_base.yaml and sd_xl_refiner.yaml files were not being installed by the configure script - FIXED
  • Generation metadata wasn't being stored in images - FIXED
  • During an update, the configure script would sometimes clear out all model definitions in models.yaml - FIXED
  • In addition, we have added a new SDXL VAE "sdxl-vae-fix" which may improve image artifacts reported by some users. This is the same VAE as madebyollin/sdxl-vae-fp16-fix, which was designed to run in fp16 mode. However, InvokeAI still needs to run this VAE in fp32 mode, so we slightly altered the name to avoid confusion.

New in InvokeAI 3.0.1 RC1 (Jul 26, 2023)

  • Stable Diffusion XL support in the Text2Image, Image2Image and Unified Canvas interfaces.
  • Can install and run both diffusers-style and .safetensors-style SDXL models.
  • Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
  • Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
  • The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
  • During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several "starter" main models.
  • User interface cleanup to reduce visual clutter and increase usability.

New in InvokeAI 3.0.0 (Jul 23, 2023)

  • Web User Interface:
  • A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
  • A Dynamic Prompts interface that lets you generate combinations of prompt elements.
  • Preliminary support for Stable Diffusion XL, the latest iteration of Stability AI's image generation models.
  • A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
  • The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
  • An experimental Nodes Editor that lets you design and execute complex image generation operations using a point-and-click interface. To activate this, please use the settings icon at the upper right of the Web UI.
  • Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM used.
  • Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
  • Long prompt support (>77 tokens).
  • Memory and speed improvements.

New in InvokeAI 3.0.0 RC 2 (Jul 20, 2023)

  • A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
  • A Dynamic Prompts interface that lets you generate combinations of prompt elements.
  • Preliminary support for Stable Diffusion XL, the latest iteration of Stability AI's image generation models.
  • A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
  • The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
  • A graphical Nodes Editor that lets you design and execute complex image generation operations using a point-and-click interface.
  • Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM used.
  • Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
  • Long prompt support (>77 tokens).
  • Memory and speed improvements.

New in InvokeAI 3.0.0 RC 1 (Jul 19, 2023)

  • Web User Interface:
  • A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
  • A Dynamic Prompts interface that lets you generate combinations of prompt elements.
  • Preliminary support for Stable Diffusion XL, the latest iteration of Stability AI's image generation models.
  • A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
  • The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
  • A graphical Nodes Editor that lets you design and execute complex image generation operations using a point-and-click interface.
  • Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM used.
  • Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
  • Long prompt support (>77 tokens).
  • Memory and speed improvements.
  • The WebUI can now be launched from the command line using either invokeai-web (preferred new way) or invokeai --web (deprecated old way).
  • Command Line Tool:
  • The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli that allows you to experiment with InvokeAI nodes.
  • Installer:
  • The console-based model installer, invokeai-model-install has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.
  • Internal:
  • Internally the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes", which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.

New in InvokeAI 3.0.0 Beta 10 (Jul 19, 2023)

  • Recent fixes:
  • Stable Diffusion XL (SDXL) 0.9 support in the node editor. See Getting Started with SDXL
  • Stable Diffusion XL models added to the optional starter models presented by the model installer
  • Memory and performance improvements for XL models (thanks to @StAlKeR7779)
  • Image upscaling using the latest version of RealESRGAN (fixed thanks to @psychedelicious)
  • VRAM optimizations to allow SDXL to run on 8 GB VRAM environments.
  • Feature-complete Model Manager in the Web GUI to provide online model installation, configuration and deletion.
  • Recommended LoRA and ControlNet models added to model installer.
  • UI tweaks, including updated hotkeys.
  • Translation and tooltip fixes
  • Documentation fixes, including description of all options in invokeai.yaml
  • Improved support for half-precision generation on Macintoshes.
  • Improved long prompt support.
  • Fix "Package 'invokeai' requires a different Python:" error

New in InvokeAI 3.0.0 Beta 8 (Jul 19, 2023)

  • Recent fixes:
  • Stable Diffusion XL (SDXL) 0.9 support in the node editor. See Getting Started with SDXL
  • Image upscaling using the latest version of RealESRGAN
  • VRAM optimizations to allow SDXL to run on 8 GB VRAM environments.
  • Feature-complete Model Manager in the Web GUI to provide online model installation, configuration and deletion.
  • Recommended LoRA and ControlNet models added to model installer.
  • UI tweaks, including updated hotkeys.
  • Translation and tooltip fixes
  • Documentation fixes, including description of all options in invokeai.yaml
  • Improved support for half-precision generation on Macintoshes.
  • Improved long prompt support.
  • Fix "Package 'invokeai' requires a different Python:" error

New in InvokeAI 3.0.0 Beta 7 (Jul 18, 2023)

  • Stable Diffusion XL (SDXL) 0.9 support in the node editor. See Getting Started with SDXL
  • Image upscaling using the latest version of RealESRGAN
  • VRAM optimizations to allow SDXL to run on 8 GB VRAM environments.
  • Feature-complete Model Manager in the Web GUI to provide online model installation, configuration and deletion.
  • Recommended LoRA and ControlNet models added to model installer.
  • UI tweaks, including updated hotkeys.
  • Translation and tooltip fixes
  • Documentation fixes, including description of all options in invokeai.yaml
  • Improved support for half-precision generation on Macintoshes.
  • Improved long prompt support.

New in InvokeAI 3.0.0 Beta 6 (Jul 16, 2023)

  • InvokeAI will now load half-precision controlnet models
  • Controlnet image preview alignment has been fixed
  • Installer issues on the Windows platform have been addressed
  • Documentation fixes
  • Multiple UI tweaks

New in InvokeAI 3.0.0 Beta 5 (Jul 11, 2023)

  • Since beta-4, the following issues have been addressed:
  • Models are now retained in VRAM up to a maximum value specified by max_vram_cache_size, defaulting to 2.75 GB. This avoids a 2-3 second delay when moving models from RAM to VRAM with each generation. Make this value higher for large models. It can safely be set to zero in order to free as much space as possible for large image generation (at the cost of speed).
  • New ability to specify the aspect ratio of generated images (portrait, landscape).
  • Better behavior when encountering an invalid, new, corrupted model in the models directory.
  • Disable hotkey for lightbox if lightbox is disabled
  • Multiple performance improvements in front end.
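As a rough illustration of the cache setting mentioned above, it is tuned in the configuration file. Note that the exact key names and nesting in invokeai.yaml vary between releases, so treat this fragment as an assumption rather than a reference:

```yaml
# Illustrative invokeai.yaml fragment -- key placement may differ by version.
InvokeAI:
  Memory/Performance:
    max_vram_cache_size: 2.75   # GB of VRAM reserved for cached models; 0 frees the most space
```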

New in InvokeAI 3.0.0 Beta 4 (Jul 10, 2023)

  • Since beta-3, the following issues have been addressed:
  • Documentation in docs/ has been updated.
  • Fix for broken Windows mimetype registry
  • Add progress image node
  • If models.yaml file doesn't exist on startup, recreate it
  • Report processing stack traces to the console

New in InvokeAI 3.0.0 Beta 3 (Jul 8, 2023)

  • Recent changes:
  • Under some conditions, the installer was writing an extraneous version: line in invokeai.yaml. This caused the application to print out the version number and immediately exit. If this is happening to you, open invokeai.yaml in a text editor (such as Notepad), find the section that starts with Other:, and delete that line and the version: line below it.
  • Fixes to the update script to allow it to run on Windows properly.
  • Upgrade to diffusers version 0.18.1, which is needed to support SDXL.
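For reference, the extraneous invokeai.yaml section described above looks roughly like the fragment below. The exact version string and surrounding keys are illustrative; your file's other sections will differ:

```yaml
# Delete both of these lines from invokeai.yaml if the app prints its
# version and immediately exits:
Other:
    version: 3.0.0
```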

New in InvokeAI 3.0.0 Beta 2 (Jul 8, 2023)

  • Img2img and canvas uploads should now work
  • White screen of death issue should be improved (may need to reload a couple of times)
  • Version is printed out at startup time and when --version provided at command line
  • UI tweaks to the sliders and spinners
  • Web GUI support for adding, editing, deleting and merging models.
  • Preliminary support for generation at half precision (float16) on Macintosh/MPS systems.
  • Textual inversion tags can now be added to the prompt. Use the "<>" icon to see which tags are installed.
  • Multiple issues in the migrate script have been addressed. The script will now no longer overwrite models previously installed in the destination directory, and can be used to upgrade a root folder in place.
  • The model merging console-based interface is working again.
  • Support for selecting intermediate clip steps ("clip skip" option). Select "Show advanced options" in the settings dialogue to get this setting.
  • Added back ability to launch web interface with invokeai --web
  • The absence of an invokeai.yaml file in the root directory will no longer cause a crash.

New in InvokeAI 3.0.0 RC 1 (Jul 8, 2023)

  • Since alpha-8 the following issues have been addressed:
  • Web GUI support for adding, editing, deleting and merging models.
  • Preliminary support for generation at half precision (float16) on Macintosh/MPS systems.
  • Textual inversion tags can now be added to the prompt. Use the "<>" icon to see which tags are installed.
  • Multiple issues in the migrate script have been addressed. The script will now no longer overwrite models previously installed in the destination directory, and can be used to upgrade a root folder in place.
  • The model merging console-based interface is working again.
  • Support for selecting intermediate clip steps ("clip skip" option). Select "Show advanced options" in the settings dialogue to get this setting.
  • Added back ability to launch web interface with invokeai --web
  • The absence of an invokeai.yaml file in the root directory will no longer cause a crash.

New in InvokeAI 3.0.0 Alpha 8 (Jul 6, 2023)

  • add LoRA interface
  • rebuild front end (will fix white screen of death issue)

New in InvokeAI 3.0.0 Alpha 7 (Jul 5, 2023)

  • Model installer asks for confirmation before deleting unselected models.
  • Fix VRAM memory leak during generation.
  • Models with "/" in their names no longer break the model migration script.
  • Preliminary work on web-based model manager UI.

New in InvokeAI 3.0.0 Alpha 3 (Jul 2, 2023)

  • adding support for ESRGAN denoising strength by @tjennings in #2598
  • 2.3.0 Documentation Fixes by @lstein in #2609
  • fix two bugs in conversion of inpaint models from ckpt to diffusers m… by @lstein in #2620
  • Fix Incorrect Windows Environment Activation Location (Manual Installation Documentation) by @blhook in #2627
  • small change to pull esrgan denoise strength through to the generate API. by @tjennings in #2623
  • Huge Docker Update - better caching, don't use root user, include dockerhub and more.... by @mauwii in #2597
  • add merge_group trigger to test-invoke-pip.yml by @mauwii in #2590
  • Strategize slicing based on free [V]RAM by @JPPhoto in #2572
  • Improve error messages from Textual Inversion and Merge scripts by @lstein in #2641
  • Added arabic locale files by @ParisNeo in #2561
  • Fix link to the installation documentation by @lstein in #2648
  • Add thresholding for all diffusers types by @JPPhoto in #2479
  • Fix typo and Hi-Res Bug by @hipsterusername in #2667
  • Fix perlin noise generator for diffusers tensors by @JPPhoto in #2678
  • [WebUI] Model Conversion by @blessedcoolant in #2616
  • fix minor typos by @fat-tire in #2666
  • Make install.bat.in point to correct configuration script by @zalo in #2680
  • build: lint/format ignores stats.html by @psychedelicious in #2681
  • [WebUI] Even off JSX string syntax by @dreglad in #2058
  • design: smooth progress bar animations by @ryanccn in #2685
  • skip huge workflows if not needed by @mauwii in https://github.com/invo...

New in InvokeAI 2.3.5 Post 2 (May 22, 2023)

  • This is a bugfix release. In previous versions, the built-in updating script did not update the Xformers library when the torch library was upgraded, leaving people with a version that ran on CPU only. Install this version to fix the issue so that it doesn't happen when updating to future versions of InvokeAI 3.0.0.
  • As a bonus, this version allows you to apply a checkpoint VAE, such as vae-ft-mse-840000-ema-pruned.ckpt to a diffusers model, without worrying about finding the diffusers version of the VAE. From within the web Model Manager, choose the diffusers model you wish to change, press the edit button, and enter the Location of the VAE file of your choice. The field will now accept either a .ckpt file, or a diffusers directory.

New in InvokeAI 2.3.5 Post 1 (May 19, 2023)

  • The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
  • Other Improvements:
  • When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running (thanks to @pedantic79 for this).

New in InvokeAI 2.3.5 (May 1, 2023)

  • Fix the "import from directory" function in console model installer by @lstein in #3211
  • [Feature] Add support for LoKR LyCORIS format by @StAlKeR7779 in #3216
  • CODEOWNERS update - 2.3 branch by @lstein in #3230
  • Enable LoRAs to patch the text_encoder as well as the unet by @damian0815 in #3214
  • Improvements to the installation and upgrade processes by @lstein in #3186
  • Revert "improvements to the installation and upgrade processes" by @lstein in #3266
  • [Enhancement] distinguish v1 from v2 LoRA models by @lstein in #3175
  • Increase sha256 chunksize when calculating model hash by @lstein in #3162
  • Bump version number to 2.3.5-rc1 by @lstein in #3267
  • [Bugfix] Renames in 0.15.0 diffusers by @StAlKeR7779 in #3184

New in InvokeAI 2.3.5 RC 1 (Apr 27, 2023)

  • This release expands support for additional LoRA and LyCORIS models, upgrades diffusers to 0.15.1, and fixes a few bugs.
  • LoRA and LyCORIS Support Improvement:
  • A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
  • Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
  • Support for the newer LoKR LyCORIS files has been added.
  • Diffusers 0.15.1:
  • This version updates the diffusers module to version 0.15.1 and is no longer compatible with 0.14. This provides a number of performance improvements and bug fixes.
  • Performance Improvements:
  • When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
  • Bug Fixes:
  • The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.

New in InvokeAI 2.3.4 Post 1 (Apr 15, 2023)

  • [FEATURE] Lora support in 2.3 by @lstein in #3072
  • [FEATURE] LyCORIS support in 2.3 by @StAlKeR7779 in #3118
  • [Bugfix] Pip - Access is denied during installation by @StAlKeR7779 in #3123
  • ui: translations update from weblate by @weblate in #2804
  • [Enhancement] save name of last model to disk whenever model changes by @lstein in #3102

New in InvokeAI 2.3.4 (Apr 10, 2023)

  • [FEATURE] Lora support in 2.3 by @lstein in #3072
  • [FEATURE] LyCORIS support in 2.3 by @StAlKeR7779 in #3118
  • [Bugfix] Pip - Access is denied during installation by @StAlKeR7779 in #3123
  • ui: translations update from weblate by @weblate in #2804
  • [Enhancement] save name of last model to disk whenever model changes by @lstein in #3102

New in InvokeAI 2.3.4 RC 1 (Apr 8, 2023)

  • This features release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.
  • LoRA and LyCORIS Support:
  • LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix .safetensors. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release)
  • To use LoRA/LyCORIS models in InvokeAI:
  • Download the .safetensors files of your choice and place them in /path/to/invokeai/loras. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
  • Add withLora(lora-file,weight) to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named loras/sushi.safetensors is present:
  • family sitting at dinner table eating sushi withLora(sushi,0.9)
  • family sitting at dinner table eating sushi withLora(sushi, 0.75)
  • family sitting at dinner table eating sushi withLora(sushi)
  • Multiple withLora() prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
  • Generate as you usually would! If you find that the image is too "crisp" try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of the model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load a SD 1.x-trained LoRA into a SD 2.x model, and vice versa. This will trigger a non-fatal error message and generation will not proceed.
  • You can change the location of the loras directory by passing the --lora_directory option to invokeai.
  • New WebUI LoRA and Textual Inversion Buttons:
  • This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt.
  • Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.
  • Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
  • By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.
  • Minor features and fixes:
  • This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.

New in InvokeAI 2.3.3 (Apr 1, 2023)

  • Enhance model autodetection during import by @lstein in #3043
  • Correctly load legacy checkpoint files built on top of SD 2.0/2.1 bases, such as Illuminati 1.1 by @lstein in #3058
  • Add support for the TI embedding file format used by negativeprompts.safetensors by @lstein in #3045
  • Keep torch version at 1.13.1 by @JPPhoto in #2985
  • Fix textual inversion documentation and code by @lstein in #3015
  • fix corrupted outputs/.next_prefix file by @lstein in #3020
  • fix batch generation logfile name to be compatible with Windows OS by @lstein in #3018
  • Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in #3011
  • prevent infinite loop when launching developer's console by @lstein in #3016
  • Prettier console-based frontend for invoke.sh on Linux systems with "dialog" installed by Joshua Kimsey.
  • ROCM debugging recipe from @EgoringKosmos

New in InvokeAI 2.3.3 RC 7 (Mar 31, 2023)

  • Enhance model autodetection during import by @lstein in #3043
  • Correctly load legacy checkpoint files built on top of SD 2.0/2.1 bases, such as Illuminati 1.1 by @lstein in #3058
  • Add support for the TI embedding file format used by negativeprompts.safetensors by @lstein in #3045
  • Keep torch version at 1.13.1 by @JPPhoto in #2985
  • Fix textual inversion documentation and code by @lstein in #3015
  • fix corrupted outputs/.next_prefix file by @lstein in #3020
  • fix batch generation logfile name to be compatible with Windows OS by @lstein in #3018
  • Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in #3011
  • prevent infinite loop when launching developer's console by @lstein in #3016
  • Prettier console-based frontend for invoke.sh on Linux systems with "dialog" installed by Joshua Kimsey

New in InvokeAI 2.3.3 RC 3 (Mar 28, 2023)

  • Enhance model autodetection during import by @lstein in #3043
  • Correctly load legacy checkpoint files built on top of SD 2.0/2.1 bases, such as Illuminati 1.1 by @lstein in #3058
  • Keep torch version at 1.13.1 by @JPPhoto in #2985
  • Fix textual inversion documentation and code by @lstein in #3015
  • fix corrupted outputs/.next_prefix file by @lstein in #3020
  • fix batch generation logfile name to be compatible with Windows OS by @lstein in #3018
  • Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in #3011
  • prevent infinite loop when launching developer's console by @lstein in #3016
  • Prettier console-based frontend for invoke.sh on Linux systems with "dialog" installed.

New in InvokeAI 2.3.3 RC 1 (Mar 26, 2023)

  • Since version 2.3.2 the following bugs have been fixed:
  • When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
  • Textual inversion will select an appropriate batchsize based on whether xformers is active, and will default to xformers enabled if the library is detected.
  • The batch script log file names have been fixed to be compatible with Windows.
  • Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
  • An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.
  • What's Changed:
  • Keep torch version at 1.13.1 by @JPPhoto in #2985
  • Fix textual inversion documentation and code by @lstein in #3015
  • fix corrupted outputs/.next_prefix file by @lstein in #3020
  • fix batch generation logfile name to be compatible with Windows OS by @lstein in #3018
  • Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in #3011
  • prevent infinite loop when launching developer's console by @lstein in #3016
  • Prettier console-based frontend for invoke.sh on Linux systems with "dialog" installed.

New in InvokeAI 2.3.2 (Mar 13, 2023)

  • fix python 3.9 compatibility by @mauwii in #2780
  • fixes crashes on merge in both WebUI and console by @lstein in #2800
  • hotfix for broken merge function by @lstein in #2801
  • [ui]: 2.3 hotfixes by @psychedelicious in #2806
  • restore previous naming scheme for sd-2.x models: by @lstein in #2820
  • quote output, embedding and autoscan directories in invokeai.init by @lstein in #2827
  • Introduce pre-commit, black, isort, ... by @mauwii in #2822
  • propose more restrictive codeowners by @lstein in #2781
  • fix newlines causing negative prompt to be parsed incorrectly by @lstein in #2838
  • Prevent crash when converting models from within CLI using legacy model URL by @lstein in #2846
  • [WebUI] Fix 'Use All' Params not Respecting Hi-Res Fix by @blhook in #2840
  • Disable built-in NSFW checker on models converted with --ckpt_convert by @lstein in #2908
  • Dynamic prompt generation script for parameter scans by @lstein in #2831

New in InvokeAI 2.3.2 Pre-release (Mar 12, 2023)

  • Bugfixes:
  • Since version 2.3.1 the following bugs have been fixed:
  • Black images appearing for potential NSFW images when generating with legacy checkpoint models and both --no-nsfw_checker and --ckpt_convert turned on.
  • Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
  • The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI
  • When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
  • Crashes that occurred during model merging.
  • Restore previous naming of Stable Diffusion base and 768 models.
  • Upgraded to latest versions of diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the assertion NDArray > 2**32 issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
  • As part of the upgrade to diffusers, the location of the diffusers-based models has changed from models/diffusers to models/hub. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your models/diffusers directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
  • New invokeai-batch script:
  • 2.3.2 introduces a new command-line only script called invokeai-batch that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:
  • a shack in the mountains, photograph
  • a shack in the mountains, watercolor
  • a shack in the mountains, oil painting
  • a chalet in the mountains, photograph
  • a chalet in the mountains, watercolor
  • a chalet in the mountains, oil painting
  • a shack in the desert, photograph
  • If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).
  • To try invokeai-batch out, launch the "developer's console" using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, run invokeai-batch --help to learn how the script works and to create your first template file for dynamic prompt generation.
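The combinatorial expansion behind a prompt list like the one above can be sketched in a few lines of Python. This is an illustration of the concept only, not invokeai-batch's actual template syntax:

```python
from itertools import product

# Hypothetical template fields; invokeai-batch's real template format may differ.
subjects = ["a shack", "a chalet"]
places = ["in the mountains", "in the desert"]
styles = ["photograph", "watercolor", "oil painting"]

# Every combination of subject x place x style: 2 x 2 x 3 = 12 prompts.
prompts = [f"{s} {p}, {st}" for s, p, st in product(subjects, places, styles)]
```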
  • What's Changed:
  • fix python 3.9 compatibility by @mauwii in #2780
  • fixes crashes on merge in both WebUI and console by @lstein in #2800
  • hotfix for broken merge function by @lstein in #2801
  • [ui]: 2.3 hotfixes by @psychedelicious in #2806
  • restore previous naming scheme for sd-2.x models: by @lstein in #2820
  • quote output, embedding and autoscan directores in invokeai.init by @lstein in #2827
  • Introduce pre-commit, black, isort, ... by @mauwii in #2822
  • propose more restrictive codeowners by @lstein in #2781
  • fix newlines causing negative prompt to be parsed incorrectly by @lstein in #2838
  • Prevent crash when converting models from within CLI using legacy model URL by @lstein in #2846
  • [WebUI] Fix 'Use All' Params not Respecting Hi-Res Fix by @blhook in #2840
  • Disable built-in NSFW checker on models converted with --ckpt_convert by @lstein in #2908
  • Dynamic prompt generation script for parameter scans by @lstein in #2831

New in InvokeAI 2.3.1 Post2 (Feb 27, 2023)

  • Enhanced support for model management:
  • InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
  • An Improved Installer Experience:
  • The installer now launches a console-based UI for setting and changing commonly-used startup options.
  • Image Symmetry Options:
  • There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the --h_symmetry_time_pct and --v_symmetry_time_pct options.
  • A New Unified Canvas Look:
  • This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout.
  • Model conversion and merging within the WebUI:
  • The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
  • An easier way to contribute translations to the WebUI:
  • We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.
  • Numerous internal bugfixes and performance improvements:
  • This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing.

New in InvokeAI 2.3.1 Post1 (Feb 25, 2023)

  • Enhanced support for model management:
  • InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
  • There are three ways of accessing the model management features:
  • From the WebUI, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
  • Using the Model Installer App:
  • Choose option (5) download and install models from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.
  • Command-line users can start this app using the command invokeai-model-install.
  • Using the Command Line Client (CLI):
  • The !install_model and !convert_model commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.
  • Internally, InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.
  • Please see INSTALLING MODELS for more information on model management.
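The checkpoint probing described above can be sketched as follows. The key names and shape heuristics reflect the standard Stable Diffusion checkpoint layout, but this function is an illustration, not InvokeAI's actual probing code:

```python
import numpy as np

# Keys from the standard SD checkpoint layout that the probe inspects.
CONV_IN = "model.diffusion_model.input_blocks.0.0.weight"
ATTN_K = "model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"

def probe_state_dict(sd: dict) -> str:
    """Guess the model variant from tensor shapes in a checkpoint state dict."""
    # Inpainting UNets take 9 input channels (4 latent + 4 masked image + 1 mask).
    if sd[CONV_IN].shape[1] == 9:
        return "inpainting"
    # SD 2.x uses a 1024-dim text-encoder context; SD 1.x uses 768.
    return "v2" if sd[ATTN_K].shape[1] == 1024 else "v1"

# Toy state dicts carrying only the shapes the probe looks at:
sd_v1 = {CONV_IN: np.zeros((320, 4, 3, 3)), ATTN_K: np.zeros((320, 768))}
sd_v2 = {CONV_IN: np.zeros((320, 4, 3, 3)), ATTN_K: np.zeros((320, 1024))}
sd_inpaint = {CONV_IN: np.zeros((320, 9, 3, 3)), ATTN_K: np.zeros((320, 768))}
```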
  • An Improved Installer Experience:
  • The installer now launches a console-based UI for setting and changing commonly-used startup options:
  • After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) change InvokeAI startup options
  • Command-line users can launch the new configure app using invokeai-configure.
  • This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) update InvokeAI. This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.
  • Command-line users can run the updater by typing invokeai-update.
  • Symmetry Options:
  • There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options --h_symmetry_time_pct and --v_symmetry_time_pct (these can be abbreviated to --h_sym and --v_sym like all other options).
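The timed mirror effect described above can be sketched with a toy latent array. This is an illustration of the idea, not InvokeAI's implementation; the function name and arguments are hypothetical:

```python
import numpy as np

def apply_h_symmetry(latent: np.ndarray, step: int, total_steps: int,
                     sym_time_pct: float) -> np.ndarray:
    """Once step/total_steps reaches sym_time_pct, mirror the left half of
    the latent onto the right half (assumes an even width, for simplicity)."""
    if step / total_steps < sym_time_pct:
        return latent
    w = latent.shape[-1]
    half = latent[..., : w // 2]
    return np.concatenate([half, half[..., ::-1]], axis=-1)

lat = np.arange(16, dtype=float).reshape(1, 4, 4)
early = apply_h_symmetry(lat, step=10, total_steps=50, sym_time_pct=0.5)  # before threshold: unchanged
late = apply_h_symmetry(lat, step=30, total_steps=50, sym_time_pct=0.5)   # after threshold: mirrored
```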
  • A New Unified Canvas Look:
  • This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:
  • Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
  • Model conversion and merging within the WebUI:
  • The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
  • An easier way to contribute translations to the WebUI
  • We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.
  • Numerous internal bugfixes and performance improvements:
  • This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing. See Detailed Change Log for a detailed list of bugs caught and squished.

New in InvokeAI 2.3.1 RC 3 (Feb 23, 2023)

  • This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
  • Enhanced support for model management:
  • InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
  • An Improved Installer Experience:
  • The installer now launches a console-based UI for setting and changing commonly-used startup options:
  • After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) change InvokeAI startup options
  • Command-line users can launch the new configure app using invokeai-configure.
  • This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) update InvokeAI. This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.
  • Image Symmetry Options:
  • There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options --h_symmetry_time_pct and --v_symmetry_time_pct (these can be abbreviated to --h_sym and --v_sym like all other options).
  • A New Unified Canvas Look:
  • This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout
  • Model conversion and merging within the WebUI:
  • The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
  • An easier way to contribute translations to the WebUI:
  • We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.
  • Numerous internal bugfixes and performance improvements:
  • This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing. See Detailed Change Log for a detailed list of bugs caught and squished.

New in InvokeAI 2.3.1 RC 2 (Feb 23, 2023)

  • Enhanced support for model management:
  • InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
  • An Improved Installer Experience:
  • The installer now launches a console-based UI for setting and changing commonly-used startup options.
  • Image Symmetry Options:
  • There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the --h_symmetry_time_pct and --v_symmetry_time_pct options.
  • A New Unified Canvas Look:
  • This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout.
  • Model conversion and merging within the WebUI:
  • The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
  • Numerous internal bugfixes and performance improvements:
  • This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing. See Detailed Change Log for a detailed list of bugs caught and squished.

New in InvokeAI 2.3.1 RC 1 (Feb 23, 2023)

  • This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
  • Enhanced support for model management:
  • InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
  • There are three ways of accessing the model management features:
  • From the WebUI, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
  • Choose option (5) download and install models from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.
  • Command-line users can start this app using the command invokeai-model-install.
  • The !install_model and !convert_model commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.
  • Internally, InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.
  • Please see INSTALLING MODELS for more information on model management.
  • An Improved Installer Experience:
  • The installer now launches a console-based UI for setting and changing commonly-used startup options:
  • After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) change InvokeAI startup options
  • This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) update InvokeAI. This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.
  • Image Symmetry Options:
  • There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options --h_symmetry_time_pct and --v_symmetry_time_pct (these can be abbreviated to --h_sym and --v_sym like all other options).
  • A New Unified Canvas Look:
  • This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:
  • Refresh the screen and go to to Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
  • Model conversion and merging within the WebUI:
  • The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
  • Numerous internal bugfixes and performance improvements:
  • This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing. See Detailed Change Log for a detailed list of bugs caught and squished.

New in InvokeAI 2.3.0 (Feb 9, 2023)

  • Migration to Stable Diffusion diffusers models:
  • Previous versions of InvokeAI supported the original model file format introduced with Stable Diffusion 1.4. In the original format, known variously as "checkpoint", or "legacy" format, there is a single large weights file ending with .ckpt or .safetensors. Though this format has served the community well, it has a number of disadvantages, including file size, slow loading times, and a variety of non-standard variants that require special-case code to handle. In addition, because checkpoint files are actually a bundle of multiple machine learning sub-models, it is hard to swap different sub-models in and out, or to share common sub-models. A new format, introduced by the StabilityAI company in collaboration with HuggingFace, is called diffusers and consists of a directory of individual models. The most immediate benefit of diffusers is that they load from disk very quickly. A longer term benefit is that in the near future diffusers models will be able to share common sub-models, dramatically reducing disk space when you have multiple fine-tune models derived from the same base.
  • When you perform a new install of version 2.3.0, you will be offered the option to install the diffusers versions of a number of popular SD models, including Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of 2.1). These will act and work just like the checkpoint versions. Do not be concerned if you already have a lot of ".ckpt" or ".safetensors" models on disk! InvokeAI 2.3.0 can still load these and generate images from them without any extra intervention on your part.
  • To take advantage of the optimized loading times of diffusers models, InvokeAI offers options to convert legacy checkpoint models into optimized diffusers models. If you use the invokeai command line interface, the relevant commands are:
  • !convert_model -- Take the path to a local checkpoint file or a URL that is pointing to one, convert it into a diffusers model, and import it into InvokeAI's models registry file.
  • !optimize_model -- If you already have a checkpoint model in your InvokeAI models file, this command will accept its short name and convert it into a like-named diffusers model, optionally deleting the original checkpoint file.
  • !import_model -- Take the local path of either a checkpoint file or a diffusers model directory and import it into InvokeAI's registry file. You may also provide the ID of any diffusers model that has been published on the HuggingFace models repository and it will be downloaded and installed automatically.
  • The WebGUI offers similar functionality for model management.
  • For advanced users, new command-line options provide additional functionality. Launching invokeai with the argument --autoconvert <path to directory> takes the path to a directory of checkpoint files, automatically converts them into diffusers models and imports them. Each time the script is launched, the directory will be scanned for new checkpoint files to be loaded. Alternatively, the --ckpt_convert argument will cause any checkpoint or safetensors model that is already registered with InvokeAI to be converted into a diffusers model on the fly, allowing you to take advantage of future diffusers-only features without explicitly converting the model and saving it to disk.
  • Please see INSTALLING MODELS for more information on model management in both the command-line and Web interfaces.
  • Support for the XFormers Memory-Efficient Cross-Attention Package:
  • On CUDA (Nvidia) systems, version 2.3.0 supports the XFormers library. Once installed, the xformers package dramatically reduces the memory footprint of loaded Stable Diffusion model files and modestly increases image generation speed. xformers will be installed and activated automatically if you specify a CUDA system at install time.
  • The caveat with using xformers is that it introduces slightly non-deterministic behavior, and images generated using the same seed and other settings will be subtly different between invocations. Generally the changes are unnoticeable unless you rapidly shift back and forth between images, but to disable xformers and restore fully deterministic behavior, you may launch InvokeAI using the --no-xformers option. This is most conveniently done by opening the file invokeai/invokeai.init with a text editor, and adding the line --no-xformers at the bottom.
  • A Negative Prompt Box in the WebUI:
  • There is now a separate text input box for negative prompts in the WebUI. This is convenient for stashing frequently-used negative prompts ("mangled limbs, bad anatomy"). The [negative prompt] syntax continues to work in the main prompt box as well.
  • To see exactly how your prompts are being parsed, launch invokeai with the --log_tokenization option. The console window will then display the tokenization process for both positive and negative prompts.
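The bracketed negative-prompt syntax can be illustrated with a toy splitter. This is only an illustration of the [negative] convention; InvokeAI's actual parsing is handled by the compel library, and the function below is hypothetical:

```python
import re

def split_prompt(prompt: str) -> tuple[str, str]:
    """Return (positive, negative) parts, treating [...] spans as negatives."""
    # Collect every bracketed span as negative-prompt text.
    negatives = re.findall(r"\[([^\]]*)\]", prompt)
    # Remove the bracketed spans and tidy leftover whitespace/commas.
    positive = re.sub(r"\[[^\]]*\]", "", prompt)
    positive = re.sub(r"\s{2,}", " ", positive).strip(" ,")
    return positive, ", ".join(negatives)
```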
  • Model Merging:
  • Version 2.3.0 offers an intuitive user interface for merging up to three Stable Diffusion models. Model merging allows you to mix the behavior of models to achieve very interesting effects. To use this, each of the models must already be imported into InvokeAI and saved in diffusers format. Launch the merger using a new menu item in the InvokeAI launcher script (invoke.sh, invoke.bat) or directly from the command line with invokeai-merge --gui. You will be prompted to select the models to merge, the proportions in which to mix them, and the mixing algorithm. The script will create a new merged diffusers model and import it into InvokeAI for your use.
  • See MODEL MERGING for more details.
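As an illustration of what an interpolation merge does, here is a toy weighted-sum blend over numpy arrays standing in for model weights. The function name and alpha parameter are hypothetical; real merging operates on full diffusers checkpoints:

```python
import numpy as np

def weighted_sum_merge(model_a: dict, model_b: dict, alpha: float) -> dict:
    """Blend two state dicts parameter-by-parameter:
    result = (1 - alpha) * A + alpha * B."""
    return {k: (1 - alpha) * model_a[k] + alpha * model_b[k] for k in model_a}

# Toy "models" with a single weight tensor each.
a = {"w": np.array([1.0, 2.0])}
b = {"w": np.array([3.0, 4.0])}
merged = weighted_sum_merge(a, b, alpha=0.5)
```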
  • Textual Inversion Training:
  • Textual Inversion (TI) is a technique for training a Stable Diffusion model to emit a particular subject or style when triggered by a keyword phrase. You can perform TI training by placing a small number of images of the subject or style in a directory and choosing a distinctive trigger phrase, such as "pointillist-style". After successful training, the subject or style will be activated by including <pointillist-style> in your prompt.
  • Previous versions of InvokeAI were able to perform TI, but it required using a command-line script with dozens of obscure command-line arguments. Version 2.3.0 features an intuitive TI frontend that will build a TI model on top of any diffusers model. To access training, launch it from a new item in the launcher script or from the command line using invokeai-ti --gui.
  • See TEXTUAL INVERSION for further details:
  • A New Installer Experience:
  • The InvokeAI installer has been upgraded in order to provide a smoother and hopefully more glitch-free experience. In addition, InvokeAI is now packaged as a PyPi project, allowing developers and power-users to install InvokeAI with the command pip install InvokeAI --use-pep517. Please see Installation for details.
  • Developers should be aware that the pip installation procedure has been simplified and that the conda method is no longer supported at all. Accordingly, the environments_and_requirements directory has been deleted from the repository.

New in InvokeAI 2.3.0 RC 1 (Feb 3, 2023)

  • There are multiple internal and external changes in this version of InvokeAI which greatly enhance the developer and user experiences respectively.

New in InvokeAI 2.2.5 (Jan 26, 2023)

  • WebUI:
  • The WebGUI now features a Model Manager that lets you load and edit models interactively. It also allows you to pick a folder to scan and import new .ckpt files @blessedcoolant
  • Add Unified Canvas Alternate UI Beta: We added a new alternative UI to the Unified Canvas that mimics traditional photo editing applications you might be familiar with. You can switch to this new UI in the Settings menu by activating the new toggle option. @blessedcoolant
  • Restore and Upscale hotkeys have been changed from ‘R’ and ‘U’ to ‘Shift+R’ and ‘Shift+U’ respectively. This was done to avoid accidental keystrokes triggering these operations. @blessedcoolant
  • Added localization: support has been added for Russian, Italian, Portuguese (Brazilian), German, Polish, and Spanish @blessedcoolant
  • Translators:
  • Russian: @netsvetaev
  • Italian: @Harvester62
  • Portuguese (Brazilian): @M-art-ucci
  • German: cofter
  • Polish: pejotr
  • Spanish: dreglad
  • If you are interested in translating InvokeAI to your language, please feel free to reach out to us on Discord.
  • CLI:
  • Add the --karras_max option to the command line. @lstein
  • Add the --version option to get the version of the app. @lstein
  • Remove requirement for Hugging Face token, now that it is no longer required. @ebr
  • Docker:
  • Optimize dockerfile. @mauwii
  • Allow usage of GPUs in Docker. @xrd
  • Bug Fixes & Updates:
  • Fix not being able to load the model while inpainting when using the free_gpu_mem option. @rmagur1203
  • Various installer improvements. @lstein
  • Fix segfault error on macOS when using Homebrew. @ebr
  • Fix a None type error when nsfw_checker was turned on. @limonspb
  • Fix the number of tokens to cap to 75 and handle blends accordingly. @damian0815
  • [CLI] Fix the time step not displaying correctly during img2img. @wfng92
  • [WebUI] Fix the initial theme setting not displaying correctly in the selector after reload. @kasbah
  • [WebUI] Fix Hires Fix on the Img2Img tab @hipsterusername
  • Fix embeddings not working correctly. @blessedcoolant
  • Fix an issue where the --config launch argument was not being recognized. @blessedcoolant
  • Retrieve threshold from an image even if it is 0. @JPPhoto
  • Add --root_dir as an alternate arg for --root during launch.
  • Relax HuggingFace login requirements during setup. @ebr
  • Fixed an issue where the --no-patchmatch would not work. @lstein
  • Fixed a crash in img2img @lstein
  • Documentation, updates, typos and fixes. @limonspb, @lstein, @hipsterusername, @mauwii
  • Developer:
  • Add concurrency to Github actions. @mauwii
  • Github action to lint python files with pyflakes @keturn
  • Fix circular dependencies on the frontend @kasbah
  • Add Github action for linting the frontend. @kasbah
  • Fix all linting warnings on the frontend. @kasbah
  • Add auto formatting for the frontend. @kasbah
  • New Contributors:
  • @limonspb made their first contribution in #1968
  • @xrd made their first contribution in #1985
  • @kasbah made their first contribution in #1995
  • @zeptofine made their first contribution in #2020
  • @shapor made their first contribution in #2057
  • @thinkyhead made their first contribution in #1751
  • @tomosuto made their first contribution in #2092