
Conversation

dineshreddy91 (Contributor)

add vLLM support

Needs a model to be served using `vllm serve model_tag`.
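For context, `vllm serve` starts an OpenAI-compatible HTTP server (on port 8000 by default), so the served model can be exercised with any OpenAI client. A minimal sketch of checking the server is up, assuming a placeholder model tag and the default port:

```python
# A minimal sketch, assuming the server was started with, e.g.:
#   vllm serve meta-llama/Llama-3.1-8B-Instruct   # placeholder model tag
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on http://localhost:8000/v1 by
# default; the API key is unchecked unless the server was started with one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# List the served model(s) to confirm the server is reachable.
for model in client.models.list():
    print(model.id)
```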

spillai (Contributor) commented Feb 27, 2025

Can you add another function with a simple call, instead of a full benchmark? See test_instructor.py.
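A simple-call variant along those lines might look like the sketch below; since test_instructor.py is not shown here, the function name, model tag, and prompt are assumptions rather than the repository's actual conventions:

```python
# Hypothetical simple-call test in the spirit of the request above; the
# function name, model tag, and prompt are illustrative assumptions.
from openai import OpenAI

def test_vllm_simple_call():
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model tag
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        max_tokens=32,
    )
    # A non-empty completion is enough to show the round trip works.
    assert response.choices[0].message.content
```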
