Skip to content

feat: Qwen Image Edit 2511 — full pipeline with LoRA, GGUF, quantization, and UI#131

Open
lstein wants to merge 46 commits intomainfrom
feat/qwen-image-edit-2511
Open

feat: Qwen Image Edit 2511 — full pipeline with LoRA, GGUF, quantization, and UI#131
lstein wants to merge 46 commits intomainfrom
feat/qwen-image-edit-2511

Conversation

@lstein
Copy link
Copy Markdown
Owner

@lstein lstein commented Mar 24, 2026

Summary

Complete implementation of the Qwen Image Edit 2511 pipeline for InvokeAI, including text-to-image generation, image editing with reference images, LoRA support (including Lightning distillation), GGUF quantized transformers, and BitsAndBytes encoder quantization.

Key Features

  • Text-to-image and image editing via Qwen Image Edit 2511 model
  • GGUF quantized transformer support with separate Diffusers model as component source for VAE/text encoder
  • LoRA support including Lightning distillation LoRAs for fast 4/8-step generation
  • BitsAndBytes encoder quantization (int8/nf4) to reduce VRAM usage for the Qwen2.5-VL text encoder
  • Shift override for Lightning LoRA sigma schedules
  • Starter models: full Diffusers model, 4 GGUF quantization levels, 2 Lightning LoRAs, and a bundle
  • Frontend UI: Advanced panel with component source selector, encoder quantization dropdown, shift override control; scheduler hidden for this model type

Backend Changes

  • Denoise: FlowMatchEulerDiscreteScheduler integration, 2x2 patch packing/unpacking, reference latent concatenation along sequence dim, zero_cond_t modulation, LoRA application via LayerPatcher with sidecar patching for GGUF, shift override for Lightning
  • Text encoder: Edit-specific system prompt template, vision token expansion per reference image, hidden state extraction with token trimming, attention mask output, BitsAndBytes quantization support
  • VAE encode/decode: AutoencoderKLQwenImage with per-channel latents_mean/latents_std normalization, 5D frame dimension, pixel-space resize before encoding
  • Model loader: Component source field for GGUF models to provide VAE + text encoder from a separate Diffusers model
  • LoRA: Loader invocations (single + collection), config detection, conversion utils, prefix constants, registered in factory discriminated union
  • GGUF loader fixes: Correct base class (ModelLoader), zero_cond_t=True, correct in_channels
  • Starter models: Lightning LoRAs (4-step, 8-step bf16), model bundle
  • transformers requirement: Downgraded back to >=4.56.0 (the video processor fallback imports already handle this)

Frontend Changes

  • Advanced settings: Component source selector (Diffusers model for GGUF), encoder quantization dropdown (None/int8/nf4), shift override checkbox+slider
  • Generation settings: Scheduler hidden for Qwen Image Edit
  • Graph builder: LoRA wiring via collection loader, reference image VAE encoding with pixel-space resize, shift/quantization passthrough
  • State management: qwenImageEditComponentSource, qwenImageEditQuantization, qwenImageEditShift in params slice with persistence and model-switch cleanup

Functional Testing Guide

1. Text-to-Image Generation (Basic)

  1. Install the "Qwen Image Edit 2511" Diffusers model from Starter Models
  2. Select it as the main model
  3. Enter a prompt (e.g., "Persian cat sitting on a red velvet cushion")
  4. Set Steps=30, CFG=4
  5. Generate — should produce a coherent image matching the prompt

2. GGUF Quantized Transformer

  1. Install a GGUF variant (e.g., Q4_K_M) from Starter Models
  2. Also install the full Diffusers model (needed as Component Source)
  3. Select the GGUF model as the main model
  4. In Advanced settings, set "Component Source (Diffusers)" to the full Diffusers model
  5. Generate with the same prompt — quality should be close to the full model

3. BitsAndBytes Encoder Quantization

  1. With the full Diffusers model selected, open Advanced settings
  2. Set "Encoder Quantization" to "4-bit (nf4)"
  3. Generate — should produce similar quality with reduced VRAM for the text encoder
  4. Test "8-bit (int8)" as well

4. LoRA Support

  1. Install "Qwen Image Edit Lightning (4-step, bf16)" from Starter Models
  2. Enable the LoRA in the LoRA panel with weight=1.0
  3. Set Steps=4, CFG=1
  4. In Advanced, check "Shift Override" and set to 3.0
  5. Generate — should produce a coherent image in 4 steps (~10x faster)
  6. Test with "8-step" variant (Steps=8, CFG=1, Shift=3)

5. Image Editing with Reference Image

  1. Add a Qwen Image Edit reference image in the reference images panel
  2. Set the canvas dimensions to match the reference image's aspect ratio
  3. Enter an edit prompt (e.g., "Change the background to a beach scene")
  4. Generate with Steps=30, CFG=1 (matching the diffusers pipeline default)
  5. The output should preserve the reference image's content while applying the edit
  6. Note: Edit quality varies by seed and prompt complexity — try different seeds if results are unsatisfactory

6. Multiple Reference Images

  1. Add 2+ reference images
  2. All images are sent to the text encoder for vision-language conditioning
  3. The first reference image is VAE-encoded as spatial conditioning for the transformer

7. Model Switching Cleanup

  1. Switch from Qwen Image Edit to another model type (e.g., FLUX)
  2. The component source, quantization, and shift settings should be cleared automatically
  3. Switch back — settings should need to be re-selected

🤖 Generated with Claude Code

@lstein
Copy link
Copy Markdown
Owner Author

lstein commented Mar 25, 2026

@copilot When I attempt to run a generation with Qwen Image (Diffusers version). I get this stack trace:

[2026-03-24 23:25:33,875]::[InvokeAI]::ERROR --> Error while invoking session 58d5a101-921c-4f77-a1e3-2df705e3e458, invocation 90bb0da8-c432-4197-9b93-42e9a8fea02f (qwen_image_edit_text_encoder): Received a NoneType for argument image_processor, but a ImageProcessingMixin was expected.
[2026-03-24 23:25:33,875]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 56, in invoke
    conditioning_data = ConditioningFieldData(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 94, in _encode
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/models/qwen2_vl/processing_qwen2_vl.py", line 91, in __init__
    super().__init__(image_processor, tokenizer, video_processor, chat_template=chat_template)
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 534, in __init__
    self.check_argument_for_proper_class(attribute_name, arg)
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 614, in check_argument_for_proper_class
    raise TypeError(
TypeError: Received a NoneType for argument image_processor, but a ImageProcessingMixin was expected.

Copy link
Copy Markdown

Copilot AI commented Mar 25, 2026

@copilot When I attempt to run a generation with Qwen Image (Diffusers version). I get this stack trace:

[2026-03-24 23:25:33,875]::[InvokeAI]::ERROR --> Error while invoking session 58d5a101-92...

The crash was caused by Qwen2VLProcessor(tokenizer=tokenizer, image_processor=None) in the fallback branch. The tokenizer submodel for this model is a Qwen2_5_VLProcessor (Qwen2.5-VL class), not Qwen2VLProcessor, so the isinstance check was failing and hitting the broken fallback.

Fixed in 666fdfa by replacing the broken isinstance check + manual constructor with AutoProcessor.from_pretrained() using the tokenizer submodel path — the same pattern used in llava_onevision_vllm.py. This correctly loads whichever processor class the model uses with all components (image_processor, tokenizer, video_processor) properly initialized.

@lstein
Copy link
Copy Markdown
Owner Author

lstein commented Mar 25, 2026

@copilot Can you figure out where the WARNING in the fourth line of log output is coming from and remove it?
``
$ invokeai-web
[2026-03-25 09:49:24,865]::[InvokeAI]::INFO --> Using torch device: NVIDIA Graphics Device
[2026-03-25 09:49:24,869]::[InvokeAI]::INFO --> cuDNN version: 90701
[2026-03-25 09:49:25,315]::[InvokeAI]::INFO --> Patchmatch initialized
[2026-03-25 09:49:26,223]::[InvokeAI]::WARNING --> "default_factory" is not supported, calling it now to set "default"
[2026-03-25 09:49:26,692]::[InvokeAI]::INFO --> InvokeAI version 6.12.0
[2026-03-25 09:49:26,692]::[InvokeAI]::INFO --> Root directory = /home/lstein/invokeai-lstein
[2026-03-25 09:49:26,693]::[InvokeAI]::INFO --> Initializing in-memory database
[2026-03-25 09:49:26,694]::[InvokeAI]::INFO --> Database update needed
[2026-03-25 09:49:26,694]::[InvokeAI]::INFO --> Using in-memory database, no backup needed
[2026-03-25 09:49:26,698]::[InvokeAI]::INFO --> Removing models/.cache directory. Converted models will now be cached in .convert_cache.
[2026-03-25 09:49:26,698]::[InvokeAI]::INFO --> Removing legacy just-in-time models. Downloaded models will now be cached in .download_cache.
[2026-03-25 09:49:26,698]::[InvokeAI]::INFO --> Removing defunct core models.
[2026-03-25 09:49:26,701]::[InvokeAI]::INFO --> Removing UNIQUE(name, base, type) constraint from models table

Copy link
Copy Markdown

Copilot AI commented Mar 25, 2026

@copilot Can you figure out where the WARNING in the fourth line of log output is coming from and remove it?
``
$ invokeai-web
...

The warning came from invokeai/app/invocations/fields.py:744InputField() doesn't support default_factory and logs a warning when it receives one. The reference_images field in qwen_image_edit_text_encoder.py was using default_factory=list instead of default=[] (which is the pattern used everywhere else in the codebase). Fixed in 62aebab.

@lstein
Copy link
Copy Markdown
Owner Author

lstein commented Mar 25, 2026

@copilot I'm getting the following error upon generating with the model:

ValueError: Unrecognized model in /home/lstein/invokeai-main/models/a28d6553-3e88-4d32-a006-24a5f3cc4144/tokenizer. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: aimv2, aimv2_vision_model, albert, align, altclip, apertus, arcee, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, bitnet, blenderbot, blenderbot-small, blip, blip-2, blip_2_qformer, bloom, blt, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, cohere2_vision, colpali, colqwen2, conditional_detr, convbert, convnext, convnextv2, cpmant, csm, ctrl, cvt, d_fine, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v2, deepseek_v3, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dia, diffllama, dinat, dinov2, dinov2_with_registers, dinov3_convnext, dinov3_vit, distilbert, doge, donut-swin, dots1, dpr, dpt, edgetam, edgetam_video, edgetam_vision_model, efficientformer, efficientloftr, efficientnet, electra, emu3, encodec, encoder-decoder, eomt, ernie, ernie4_5, ernie4_5_moe, ernie_m, esm, evolla, exaone4, falcon, falcon_h1, falcon_mamba, fastspeech2_conformer, fastspeech2_conformer_with_hifigan, flaubert, flava, flex_olmo, florence2, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, gemma3n, gemma3n_audio, gemma3n_text, gemma3n_vision, git, glm, glm4, glm4_moe, glm4v, glm4v_moe, glm4v_moe_text, glm4v_text, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gpt_oss, gptj, gptsan-japanese, granite, granite_speech, granitemoe, granitemoehybrid, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hgnet_v2, hiera, hubert, hunyuan_v1_dense, hunyuan_v1_moe, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, internvl, internvl_vision, jamba, janus, jetmoe, jukebox, kosmos-2, kosmos-2.5, kyutai_speech_to_text, layoutlm, layoutlmv2, layoutlmv3, led, levit, lfm2, lfm2_vl, lightglue, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longcat_flash, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, metaclip_2, mgp-str, mimi, minimax, ministral, mistral, mistral3, mixtral, mlcd, mllama, mm-grounding-dino, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, modernbert-decoder, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmo3, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, ovis2, owlv2, owlvit, paligemma, parakeet_ctc, parakeet_encoder, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, perception_encoder, perception_lm, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_omni, qwen2_5_vl, qwen2_5_vl_text, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen2_vl_text, qwen3, qwen3_moe, qwen3_next, qwen3_omni_moe, qwen3_vl, qwen3_vl_moe, qwen3_vl_moe_text, qwen3_vl_text, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam2, sam2_hiera_det_model, sam2_video, sam2_vision_model, sam_hq, sam_hq_vision_model, sam_vision_model, seamless_m4t, seamless_m4t_v2, seed_oss, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip2_vision_model, siglip_vision_model, smollm3, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, t5gemma, table-transformer, tapas, textnet, time_series_transformer, timesfm, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, vaultgemma, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, vjepa2, voxtral, voxtral_encoder, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xcodec, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xlstm, xmod, yolos, yoso, zamba, zamba2, zoedepth


Copy link
Copy Markdown

Copilot AI commented Mar 25, 2026

@copilot I'm getting the following error upon generating with the model:

ValueError: Unrecognized model in /home/lstein/invokeai-main/models/a28d6553-3e88-4d32-a006-24a5f3cc414...

Fixed in 8bbb06d. The tokenizer subdirectory contains tokenizer/preprocessor files but no config.json with model_type, which AutoProcessor requires to identify the processor class via AutoConfig. Switched to Qwen2_5_VLProcessor.from_pretrained directly — it loads from the tokenizer subdirectory without needing that type-detection step.

@lstein
Copy link
Copy Markdown
Owner Author

lstein commented Mar 25, 2026

@copilot I'm still getting tokenizer errors:


Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 53, in invoke
    prompt_embeds = self._encode(context, pil_images)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 81, in _encode
    processor = Qwen2_5_VLProcessor.from_pretrained(str(tokenizer_abs_path), local_files_only=True)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 1394, in from_pretrained
    args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 1453, in _get_arguments_from_pretrained
    args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/models/auto/image_processing_auto.py", line 489, in from_pretrained
    raise initial_exception
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/models/auto/image_processing_auto.py", line 476, in from_pretrained
    config_dict, _ = ImageProcessingMixin.get_image_processor_dict(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/image_processing_base.py", line 361, in get_image_processor_dict
    raise OSError(
OSError: Can't load image processor for '/home/lstein/invokeai-main/models/a28d6553-3e88-4d32-a006-24a5f3cc4144/tokenizer'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/lstein/invokeai-main/models/a28d6553-3e88-4d32-a006-24a5f3cc4144/tokenizer' is the correct path to a directory containing a preprocessor_config.json file

Copy link
Copy Markdown

Copilot AI commented Mar 25, 2026

@copilot I'm still getting tokenizer errors:


...

Fixed in ab7e1ec. The tokenizer/ subdirectory only holds tokenizer files — preprocessor_config.json lives at the model root. Removed the / "tokenizer" suffix so Qwen2_5_VLProcessor.from_pretrained is now called on the root directory. It finds preprocessor_config.json (image processor) at the root and tokenizer files inside tokenizer/ automatically, while still skipping the model_type detection that AutoProcessor requires.

@lstein
Copy link
Copy Markdown
Owner Author

lstein commented Mar 25, 2026

@copilot
The next error involves the image processor. Note that I have not loaded any reference images because the user interface is not letting me add reference images for this model type. Could you modify the UI to allow reference images to be uploaded? Please see the FLUX2 model support for an example of the way to do this.
Also, please handle the scenario in which the user invokes the model without providing any reference images; let's see if the model can perform txt2image without a reference image to work on.

[2026-03-25 11:13:46,158]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/image_processing_base.py", line 354, in get_image_processor_dict
    resolved_image_processor_file = resolved_image_processor_files[0]
                                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range

Copy link
Copy Markdown

Copilot AI commented Mar 25, 2026

@copilot
The next error involves the image processor. Note that I have not loaded any reference images because the user interface is not letting me add reference images for this mod...

Fixed in dec4a46.

Processor loading: Replaced Qwen2_5_VLProcessor.from_pretrained(root_dir) with a component-based approach that searches for preprocessor_config.json in three locations (tokenizer/, model root, image_processor/) and falls back to Qwen2_5_VLImageProcessor() defaults if not found. This works regardless of how the model was downloaded.

Reference images UI: Added qwen_image_edit_reference_image support following the FLUX2 pattern — clicking "Add Reference Image" when a Qwen model is selected now shows a reference image slot with just an image picker (no model selector needed). Reference images are collected and wired into the text encoder's reference_images input in the graph.

txt2img without reference images: No warning or block when no reference images are set — the text encoder works with an empty list, so txt2img works fine.

@lstein
Copy link
Copy Markdown
Owner Author

lstein commented Mar 25, 2026

@copilot New error:

[2026-03-25 12:13:00,527]::[InvokeAI]::ERROR --> Error while invoking session aeec3262-439f-4507-9207-fbf879faae9d, invocation 46994740-f85e-4f29-bb6c-ededfd5c1e00 (qwen_image_edit_text_encoder): Received a NoneType for argument video_processor, but a BaseVideoProcessor was expected.
[2026-03-25 12:13:00,528]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 52, in invoke
    prompt_embeds = self._encode(context, pil_images)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 101, in _encode
    processor = Qwen2_5_VLProcessor(tokenizer=tokenizer, image_processor=image_processor)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/processing_qwen2_5_vl.py", line 96, in __init__
    super().__init__(image_processor, tokenizer, video_processor, chat_template=chat_template)
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 534, in __init__
    self.check_argument_for_proper_class(attribute_name, arg)
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 614, in check_argument_for_proper_class
    raise TypeError(
TypeError: Received a NoneType for argument video_processor, but a BaseVideoProcessor was expected.

lstein and others added 7 commits March 29, 2026 22:09
The negative conditioning node was always added to the graph, causing
the text encoder to be loaded twice even when CFG=1 (where the negative
prediction is unused). Now only adds the negative node when cfg_scale > 1.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Kohya LoRAs use underscore-separated keys like
lora_unet_transformer_blocks_0_attn_to_k.lokr_w1 instead of the
diffusers dot-separated format. Add:

- Kohya key detection (lora_unet_transformer_blocks_*)
- Key conversion mapping from Kohya underscores to model dot-paths
- Updated LoRA config detection to recognize Kohya format + LoKR suffixes
- Flux Kohya exclusion (lora_unet_double_blocks, lora_unet_single_blocks)
- Test model for Kohya LoKR identification

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Float16 noise has limited precision that creates quantization patterns
visible as vertical ripple artifacts, especially with few-step Lightning
LoRA generation where the denoiser doesn't have enough steps to smooth
them out. Use float32 (matching Z-Image and the diffusers pipeline).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@lstein
Copy link
Copy Markdown
Owner Author

lstein commented Apr 4, 2026

@copilot
An external reviewer has recommended adding the following regression tests. They seem like reasonable requests to me. Could you review these comments, and if there are no issues, go ahead and implement them?


Here are the comments:

  1. Qwen Kohya LoRA identification test - Add to a model-identification test module that already scans tests/model_identification/stripped_models/.

    What it should assert:

    • The new fixture qwen_image_kohya_lokr_test.safetensors is identified as:
      • type == LoRA
      • format == LyCORIS
      • base == qwen-image

    Why:

    • The new commit adds only the stripped fixture plus detection logic in invokeai/backend/model_manager/configs/lora.py. That path should be exercised explicitly.

    How to wire it:

    • Extend the existing model-identification parametrized test table to include:
      • tests/model_identification/stripped_models/f9f3c9fa-9449-4f90-996e-ea6be6b7d233/qwen_image_kohya_lokr_test.safetensors
  2. Pure conversion test for Kohya Qwen keys - Add a new backend unit test file, for example:

    • tests/backend/patches/test_qwen_image_lora_conversion_utils.py

    What it should assert:

    • A minimal Kohya-format state dict like:
      • lora_unet_transformer_blocks_0_attn_to_k.lokr_w1
      • lora_unet_transformer_blocks_0_attn_to_k.lokr_w2
    • converts into a ModelPatchRaw with a layer key:
      • transformer.transformer_blocks.0.attn.to_k

    Why:

    • This is the actual new functionality in invokeai/backend/patches/lora_conversions/qwen_image_lora_conversion_utils.py.

    How to wire it:

    • Import lora_model_from_qwen_image_state_dict.
    • Build a tiny synthetic state dict with small tensors.
    • Assert model.layers.keys() exactly matches the expected prefixed module path.
  3. Unknown Kohya submodule behavior test - Same file as above.

    What it should assert:

    • A state dict with an unrecognized Kohya key like:
      • lora_unet_transformer_blocks_0_unknown_projection.lokr_w1
    • does not silently produce a partially empty patch set unnoticed.

    Preferred assertion:

    • Either raise ValueError, or if the intended behavior is skip, assert the resulting ModelPatchRaw.layers is empty and document that explicitly.

    Why:

    • Right now, unknown layers are skipped silently in invokeai/backend/patches/lora_conversions/qwen_image_lora_conversion_utils.py:115. That should be locked down by a test.

    How to wire it:

    • Add a separate test case next to the successful conversion case.
    • If behavior stays "skip", assert exactly zero converted layers.
  4. LoRA loader integration test for Qwen Kohya - Add to an existing loader-focused backend test module, or create:

    • tests/backend/model_manager/load/test_lora_loader_qwen_image.py

    What it should assert:

    • With base=qwen-image and format=LyCORIS, the LoRA loader routes the loaded state dict through

    How to wire it:

    • Mock load_file() to return a synthetic Kohya-format state dict.
    • Instantiate LoRALoader with a minimal fake config whose base is QwenImage and format is LyCORIS.
    • Call _load_model() and assert on the result.
  5. Qwen graph-builder CFG gating test - Add a frontend unit test, likely under:

    • invokeai/frontend/web/src/features/nodes/util/graph/generation/buildQwenImageGraph.test.ts

    What it should assert:

    • When cfgScale <= 1, the built graph does not include neg_prompt and does not wire negative_conditioning.
    • When cfgScale > 1, it does.

    Why:

    • That behavior was added recently in invokeai/frontend/web/src/features/nodes/util/graph/generation/buildQwenImageGraph.ts.

    How to wire it:

    • Build the graph with a minimal mocked GraphBuilderArg.
    • Inspect node ids/types and edges in the returned graph object.
    • No UI rendering needed.

lstein and others added 8 commits April 4, 2026 11:24
Test 1: Model identification (stripped model fixture)
- Kohya LoKR format LoRA detected as type=lora, format=lycoris, base=qwen-image

Test 2: Conversion unit tests (12 tests)
- Format detection: Kohya vs diffusers vs empty state dicts
- Kohya key conversion: attention, MLP, unknown, non-matching keys
- Full model conversion: Kohya and diffusers produce correct prefixed layer keys
- Prefix stripping: "transformer." prefix handled correctly

Test 3: Unknown key behavior tests
- Unknown Kohya sub-modules are silently skipped (1 known + 1 unknown = 1 layer)
- All-unknown state dict produces empty ModelPatchRaw (0 layers)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@lstein lstein force-pushed the feat/qwen-image-edit-2511 branch from ba333f3 to e806505 Compare April 5, 2026 17:35
…raph builder

chore(backend): add test for heuristic detection of Qwen Image Edit GGUF model variant

chore(frontend): add regression test for ref images not added to qwen image in generate mode

fix(frontend): graph build handling of Qwen Image when CFG <=1

chore(frontend): add regression test for optimal dimension selection
@lstein lstein force-pushed the feat/qwen-image-edit-2511 branch from a9252c3 to c2d7b8b Compare April 5, 2026 17:56
@lstein lstein force-pushed the feat/qwen-image-edit-2511 branch from e2710c4 to b7de7d7 Compare April 5, 2026 22:55
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants