Yep, definitely open to pull requests for new features, although the default install can't have any additional dependencies, so any feature that requires one would need to be added as an extension. I'm OK with adding any additional hooks or necessary features to main.py to support the use case. It's also possible to implement the core streaming feature in main.py, then let an extension mark up the results. Another way around the dependency limit is calling an external tool if it exists for that functionality, like the built-in Agent Browser extension does, which is only enabled if its dependency is available.

A lot of the features rely on chat request filters that intercept completed requests or tool calls to implement their functionality. That wouldn't work for streaming, since those requests need to have completed, so I'm guessing additional progress hooks/filters would need to be added for streaming requests to notify the CLI/UI of interim partial responses as they're received.

I've got other stuff to work on, so I'll let you look into how best to add it. Feel free to discuss the approaches you want to consider implementing, or share a spike or prototype you want me to look at.
Hi there!
I really like the look of the webui. I was wondering if you'd be open to adding streaming support for the model responses from OpenAI-compatible endpoints (e.g., llama.cpp hosted on a box in your house while you're using your laptop).
I haven't really looked at the code yet beyond asking deepwiki about the underlying support, so I thought I'd ask before digging in seriously.
I think it's pretty slick that you only have one dependency. My only other wish would be first-class CLI support for streaming responses, with pretty-printed Markdown à la rich or Textual. Let me know what you think! Thanks again
-- jwin
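For reference, the streaming asked about above boils down to consuming the OpenAI-compatible SSE format that llama.cpp's server also emits: `data: {json}` lines carrying `choices[0].delta.content` fragments, ended by a `data: [DONE]` sentinel. A minimal sketch of extracting the deltas, assuming that standard chunk shape and no project-specific code:

```python
import json


def iter_deltas(sse_lines):
    """Yield content deltas from raw SSE lines of a chat completion stream."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        event = json.loads(payload)
        # Role-only or empty deltas carry no text; skip them.
        delta = event["choices"][0]["delta"].get("content")
        if delta:
            yield delta
```

Joining the yielded deltas reproduces the full response text, and a CLI could re-render the accumulated Markdown after each delta for live pretty-printing.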