COMPLETED UPDATES
7/29/25
- Update 4: Version 1.2.4
This update introduces a more organized and powerful configuration experience, adds dedicated settings for Kontext editing, and refines the user interface.
- Tabbed Bot Settings in Configurator: The "Bot Settings" tab in the GUI has been reorganized into five distinct sub-tabs for clarity:
  - General: Core model selections, VAE, variation settings, batch sizes, MP size, and upscale factor.
  - Flux: Default style, steps, and guidance specifically for Flux generations.
  - SDXL: Default style, steps, guidance, and negative prompt for SDXL generations.
  - Kontext: New dedicated settings for the `/edit` command, including default steps, guidance, and MP size.
  - LLM: Settings for the prompt enhancer and display preferences. Added support for OpenAI OSS models and all other thinking models via Groq, as well as OpenAI GPT-5.
- Model-Specific Default Styles: The single `default_style` setting has been split. You can now set `default_style_flux` and `default_style_sdxl` independently in the configurator. The bot will automatically apply the correct default style based on the model being used.
- Advanced MP Size Control: In the configurator, the "Default MP Size" fields are now float values instead of a dropdown, allowing for more precise control over generation resolution (e.g., `1.15`). The `/settings` command in Discord retains the user-friendly dropdown.
- Variation Batch Size: A new "Variation Batch Size" setting has been added for future features.
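For context on how a float MP target maps to an actual resolution: the megapixel target combines with the aspect ratio to pick a width and height. A minimal sketch of that arithmetic (the rounding-to-64 rule here is an assumption for illustration, not the bot's actual code):

```python
import math

def resolution_for(mp_target: float, ar_w: int, ar_h: int, multiple: int = 64):
    """Pick a width/height pair hitting roughly `mp_target` megapixels
    at the given aspect ratio, snapped to a model-friendly multiple.
    Illustrative arithmetic only; the bot may round differently."""
    pixels = mp_target * 1_000_000
    width = math.sqrt(pixels * ar_w / ar_h)
    height = width * ar_h / ar_w
    snap = lambda v: max(multiple, int(round(v / multiple)) * multiple)
    return snap(width), snap(height)
```

For example, `resolution_for(1.15, 16, 9)` lands near 1408x832, within a few percent of the 1.15 MP target.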
- UI & Bug Fixes:
  - The variation buttons on generation results have been refined. Single-image results (like variations or reruns) now correctly show distinct `Vary W 🤏` and `Vary S 💪` buttons. Multi-image (batch) results correctly show simplified `V1`, `V2`, etc., which dynamically use your current default variation setting.
  - Fixed a bug where the `MP` value was not displayed correctly in the final completion message.
  - Fixed a visual bug in the configurator where the Bot Settings page would not refresh correctly after being changed via the `/settings` command in Discord.
  - The file scanner now correctly recognizes completed Kontext jobs (prefixed with `EDIT_`).
7/6/25
- Update 3:
- Fixed a few bugs. The official V1.2.3 update now has all features working with no known bugs; as of today this is stable release 1. The update tool will pull updates on the next numbered release (1.2.4).
7/4/25
- Update 2:
- Added an auto-update feature so the app and repo stay in sync without needing to re-download the zip as new features are added. The automatic update on startup can be toggled in `Main Config > App Settings`; updates can also be triggered manually via `Tools > Update Application`.
- Added sub-tabs to the LLM prompts so they are easier to manage.
7/2/25
- Update 1: Integrated FLUX Kontext for powerful instruction-based image editing and stitching via the new `/edit` command. The LLM Enhancer now supports multi-modal vision to better interpret edit instructions.
- Hotfix 1: Fixed a job cancellation issue, improved first-time setup with venv creation, enhanced LLM support for new models, and added a tool to pull complete LLM model lists from the GUI.
KNOWN BUGS
- None(?)
PLANNED UPDATES
- New GUI
- Metadata on/off toggle for all jobs
(IF YOU DO NOT HAVE A DISCORD BOT ACCOUNT ALREADY CREATED, OPEN "HOW TO DISCORD BOT.txt" AND FOLLOW INSTRUCTIONS)
Make sure you already have ComfyUI installed. If you need ComfyUI still, you can download it HERE.
Download the portable zip from this repository via the "Code" button at the top of the page, or click HERE to download the zip directly. I recommend placing it in your ComfyUI Portable folder, but you can put it anywhere and it should work just fine.
- The bot requires specific custom nodes. You can install them manually by cloning the following GitHub repositories into your `ComfyUI/custom_nodes` folder, or by using the Configurator's "Install/Update Custom Nodes" tool (under the "Tools" menu):
  - https://github.com/rgthree/rgthree-comfy.git
  - https://github.com/ssitu/ComfyUI_UltimateSDUpscale.git
  - https://github.com/jamesWalker55/comfyui-various.git
  - https://github.com/city96/ComfyUI-GGUF.git
  - https://github.com/tsogzark/ComfyUI-load-image-from-url.git
  - https://github.com/BobsBlazed/Bobs_Latent_Optimizer.git
  - https://github.com/Tenos-ai/Tenos-Resize-to-1-M-Pixels.git
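If you prefer scripting the manual install, the clone-or-update loop can be sketched in Python. This is only an illustration, not the Configurator's actual tool; point `custom_nodes` at your own `ComfyUI/custom_nodes` directory:

```python
import subprocess
from pathlib import Path

# Repository list from the setup section above.
REPOS = [
    "https://github.com/rgthree/rgthree-comfy.git",
    "https://github.com/ssitu/ComfyUI_UltimateSDUpscale.git",
    "https://github.com/jamesWalker55/comfyui-various.git",
    "https://github.com/city96/ComfyUI-GGUF.git",
    "https://github.com/tsogzark/ComfyUI-load-image-from-url.git",
    "https://github.com/BobsBlazed/Bobs_Latent_Optimizer.git",
    "https://github.com/Tenos-ai/Tenos-Resize-to-1-M-Pixels.git",
]

def repo_dirname(url: str) -> str:
    """Directory name `git clone` creates for a repo URL."""
    return url.rsplit("/", 1)[-1].removesuffix(".git")

def install_nodes(custom_nodes: Path) -> None:
    """Clone each repo into custom_nodes, or pull if it already exists."""
    for url in REPOS:
        dest = custom_nodes / repo_dirname(url)
        if dest.exists():
            subprocess.run(["git", "-C", str(dest), "pull"], check=True)
        else:
            subprocess.run(["git", "clone", url, str(dest)], check=True)
```

Run `install_nodes(Path("ComfyUI/custom_nodes"))` from your ComfyUI root, then restart ComfyUI so the new nodes load.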
- Qwen Image: Download the latest `qwen_image` checkpoint bundle from the official Qwen Image release on Hugging Face and place the `.safetensors` file inside your configured `ComfyUI/models/checkpoints` directory. Qwen workflows rely on the built-in `ModelSamplingAuraFlow` node, which ships with current ComfyUI builds; update ComfyUI if the node is missing.
- WAN 2.2: Install the WAN 2.2 text-to-image and image-to-video checkpoints (WAN reports them as `wan2.2.safetensors` and `wan2.2-video.ckpt` respectively) into the same checkpoints folder so the bot can resolve them via `/settings`. WAN workflows require the upstream `ModelSamplingSD3` node bundled with the official WAN ComfyUI release; pull the latest WAN custom node package or sync from their GitHub repository to ensure the sampler is available.
- Shared Tenos nodes: Every Qwen and WAN workflow still injects Bob's Latent Optimizer Advanced and the Tenos Resize node to keep prompt handling consistent with Flux/SDXL. Confirm that both custom nodes listed above load without errors before running generations.
- WAN animation flow: Any WAN generation queued through the bot surfaces a 🎞️ follow-up button in Discord. This lets you pass the freshly generated image into WAN's image-to-video/text-to-video workflow without leaving the bot, giving a one-click path from still image to animation.
Before using Tenosai-Bot, you MUST run the configurator (TENOSAI-BOT.bat or by executing python config-editor-script.py) to:
- Map all necessary file paths (Outputs, Models, CLIPs, LoRAs, Custom Nodes) under the "Main Config" tab.
- Input your unique Discord bot token into the `BOT_API -> KEY` field in "Main Config".
- Input the admin's Discord User ID into the `ADMIN -> ID` field in "Main Config".
- Optionally input API keys for Google Gemini, Groq, and/or OpenAI in the `LLM_ENHANCER` section of "Main Config" if you plan to use the LLM Prompt Enhancer feature.
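To picture how those keys nest, here is a hypothetical illustration of the mapping. Only `BOT_API -> KEY`, `ADMIN -> ID`, and the `LLM_ENHANCER` section come from the steps above; the provider sub-key names are placeholders, and the real config file's layout may differ:

```python
# Hypothetical shape of the config keys described above — illustration only.
config = {
    "BOT_API": {"KEY": "<your-discord-bot-token>"},
    "ADMIN": {"ID": "<your-discord-user-id>"},
    "LLM_ENHANCER": {
        # Placeholder field names, not the real ones from the configurator.
        "GEMINI_API_KEY": "",
        "GROQ_API_KEY": "",
        "OPENAI_API_KEY": "",
    },
}

token = config["BOT_API"]["KEY"]
admin_id = config["ADMIN"]["ID"]
```

In practice you never edit this by hand; the Configurator GUI writes these fields for you.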
This step is crucial for the bot to function correctly. After initial setup, use the "Bot Control" tab in the configurator to start the bot. The bot uses the model selected via /settings or the Configurator as the default for new generations.
OPTIONAL: Download the Tenos Official Flux Dev Finetune from Huggingface HERE
Commands: /gen or /please
Usage: /gen [prompt] [options] or /please [prompt] [options]
Options:
- `--seed [number]`: Set a specific seed for reproducibility.
- `--g [number]`: Set guidance scale for Flux models (e.g., `3.5`). Default in `/settings`.
- `--g_sdxl [number]`: Set guidance scale for SDXL models (e.g., `7.0`). Default in `/settings`.
- `--ar [W:H]`: Set aspect ratio (e.g., `--ar 16:9`). Default is `1:1`.
- `--mp [M]`: (Flux & SDXL) Set megapixel target size (e.g., `0.5`, `1`, `1.75`). Default in `/settings`.
- `--img [strength] [URL]`: (Flux only) Use img2img. Strength `S` (0-100), `URL` of input image.
- `--style [style_name]`: Apply a predefined LoRA style (see `/styles`). Default in `/settings`.
- `--r [N]`: Run the prompt `N` times with different seeds (max 10).
- `--no "[negative_prompt_text]"`: (SDXL only) Provide a negative prompt.
Example: /gen a majestic lion --ar 16:9 --seed 1234 --style realistic --g_sdxl 6.5
Example (SDXL with custom negative): /gen cyberpunk city --no "trees, nature, day"
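A rough picture of how flag-style input like the examples above can be tokenized. This is a simplified sketch, not the bot's actual parser; in particular the two-argument `--img` flag is not handled here:

```python
import shlex

def split_flags(text: str):
    """Split raw /gen input into (prompt, {flag: value}).
    Simplified illustration: assumes each flag takes exactly one argument,
    so multi-argument flags like --img are out of scope."""
    tokens = shlex.split(text)  # respects quoted values like --no "trees, day"
    prompt, flags, i = [], {}, 0
    while i < len(tokens):
        tok = tokens[i]
        if tok.startswith("--"):
            flags[tok[2:]] = tokens[i + 1] if i + 1 < len(tokens) else ""
            i += 2
        else:
            prompt.append(tok)
            i += 1
    return " ".join(prompt), flags
```

For the first example, `split_flags('a majestic lion --ar 16:9 --seed 1234')` yields the prompt `a majestic lion` with `ar` and `seed` captured as flags.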
LLM Prompt Enhancer:
- An admin can enable a prompt enhancer via `/settings`.
- If enabled, your initial prompt may be rewritten by the selected LLM (Google Gemini, Groq, or OpenAI) to be more descriptive. This applies to the `/gen` and `/edit` commands.
- Generated messages will have a ✨ icon if the enhancer was used successfully.
- Edit the system prompts used by the enhancer via the Configurator's "LLM Prompts" tab.
Command: /edit
Usage: /edit [instruction] [image1] [image2] [image3] [image4] [options]
The edit modal now lets you switch between the original Flux Kontext workflow and the official Qwen Image Edit graph. Kontext continues to support up to 4 images with multi-image blending, while Qwen Image Edit focuses on single-image edits that lean on AuraFlow guidance for sharper, instruction-following results. Pick the mode that best suits your task before submitting the job.
Options:
- `--g [number]`: Set guidance scale for the edit (e.g., `3.0`).
- `--ar [W:H]`: Set aspect ratio for the final output canvas.
- `--steps [number]`: Set the number of steps for the generation.
Example: /edit instruction:make the cat wear a wizard hat image1:<upload_cat_image>
Example: /edit instruction:blend these two styles image1:<upload_style1_image> image2:<upload_style2_image> --ar 16:9
Command: Reply with --up or click the Upscale ⬆️ button.
Usage: Reply to a generated image with --up [options] or click the button.
Options (for reply command):
- `--seed [number]`: Set a specific seed for the upscale.
- `--style [style_name]`: Apply a different style during the upscale process.
Example (replying to an image): --up --seed 5678 --style detailed
Command: Reply with --vary [type] or click the Vary W 🤏 / Vary S 💪 buttons.
Usage: Reply to a generated image with --vary [type] [options] or click the button.
Types:
- `w`: Weak variation (subtle changes, lower denoise)
- `s`: Strong variation (significant changes, higher denoise)
Options (for reply command):
- `--prompt "[new_prompt]"`: If Remix Mode is ON (via `/settings`), use this new prompt for the variation.
- `--no "[negative_prompt_text]"`: (SDXL variation only) Sets/replaces the negative prompt for this variation.
- `--style [style_name]`: Apply a different style to the variation.
Remix Mode:
- If "Variation Remix Mode" is ON (via `/settings`), clicking a Vary button (🤏/💪) will open a modal to edit the prompt before generating.
- Rerun 🔄: Reruns the original generation prompt and parameters with a new seed.
- Edit ✏️: Opens a modal to perform a new Kontext Edit on the selected image(s). You can provide new instructions and even add more images to blend.
- `/styles`: View available style presets via DM.
- `/ping`: Check bot latency.
- `/help`: Show this help information.
- Delete 🗑️ / Reply `--delete`: (Admin/Owner only) Deletes generated image file(s) AND the Discord message.
- React with 🗑️: (Admin/Owner only) Same as the Delete button.
- Reply `--remove`: (Admin/Owner only) Removes the Discord message only (files remain).
- Cancel ⏸️: (Admin/Owner of job only) Appears on queued messages to cancel the job in ComfyUI.
- `--show` (Reply): (Admin only) Get a DM with the full prompt string used for a generation.
- `/settings`: Configure default models (Flux, SDXL, Kontext), CLIPs, generation parameters, LLM enhancer, and more.
- `/sheet [src]`: Queue prompts from a TSV file (URL or Discord Message ID/Link).
- `/clear`: Clear the ComfyUI processing queue.
- `/models`: List models available to ComfyUI via DM.
The Configurator Tool (TENOSAI-BOT.bat or python config-editor-script.py) allows admins to:
- Main Config: Update all critical paths, Bot Token, Admin ID, and LLM API Keys.
- Bot Settings: Set global defaults for all generation parameters (mirrors `/settings`).
- LoRA Styles: Create, edit, and favorite LoRA style presets.
- Favorites: Mark favorite Models, CLIPs, and Styles for easier selection in menus.
- LLM Prompts: Edit the powerful system prompts used by the LLM Enhancer.
- Bot Control: Start/Stop the bot script and view its live log output.
- Tools Menu: Install/Update Custom Nodes, Scan for new models/checkpoints/CLIPs, and refresh the LLM models list.
Important Notes:
- Changes made in the Configurator (especially paths and API keys) require restarting the bot script to take effect (use the "Bot Control" tab).
- The bot distinguishes between Flux and SDXL workflows based on the model selected in `/settings`. Ensure your selected model has the correct prefix (e.g., "Flux: model.gguf" or "SDXL: checkpoint.safetensors").
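That prefix convention amounts to a simple string check; a minimal sketch of the dispatch (the error behavior for unprefixed names is an assumption, not the bot's documented handling):

```python
def workflow_family(model_name: str) -> str:
    """Pick a workflow family from the model's display-name prefix,
    following the 'Flux: ...' / 'SDXL: ...' convention noted above."""
    if model_name.startswith("Flux:"):
        return "flux"
    if model_name.startswith("SDXL:"):
        return "sdxl"
    # Assumed fallback: treat an unrecognized prefix as a configuration error.
    raise ValueError(f"model name lacks a recognized prefix: {model_name!r}")
```

If a model ever routes to the wrong workflow, checking its prefix in `/settings` is the first thing to verify.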
Enjoy creating! ❤️ BobsBlazed @Tenos.ai









