File Name
https://github.com/GoogleCloudPlatform/generative-ai/blob/main/vision/getting-started/imagen3_editing.ipynb
What happened?
Hello,
I have a question regarding the imagen3_editing.ipynb notebook, specifically concerning the Canny and Scribble ControlNet controls.
Firstly, in the example provided for Canny control, I noticed that Canny edge detection seems to be applied as a preprocessing step to the input image before it's used as a control. I'm interested in understanding how to control the strength or influence of this Canny control. Is there a parameter or setting that allows me to adjust how closely the generated image follows the Canny edges? I'd like to know if it's possible to make the generation more or less reliant on the Canny control.
Secondly, when looking at the Scribble control example, I don't see a similar explicit preprocessing step like the Canny edge detection being applied. This leads me to wonder about the correct way to apply the Scribble control. Could you please clarify the process for using Scribble control effectively within imagen3_editing.ipynb? Specifically, I'm unsure if there's a specific type of input image expected for Scribble, or if there are particular steps needed to prepare a Scribble image for use with the ControlNet.
Any guidance on controlling the Canny strength and the proper application of Scribble control would be greatly appreciated.
Thank you for your help!
Relevant log output
Code of Conduct
I agree to follow this project's Code of Conduct