Description
I'm using asyncio to make concurrent requests to a vLLM server, which should then batch them together (continuous batching). However, looking at the logs, vLLM passes each request to the LLM one at a time.

Note: I can't use multi-threaded requests; I'm limited to a single thread.

Here is my code:
```python
import asyncio

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import JsonOutputParser


async def llm_call(prompt_template, engine, parser, kwg):
    # .invoke() is synchronous and not awaitable; the async variant is .ainvoke(),
    # which yields to the event loop so the calls can overlap.
    raw_output = await (prompt_template | engine).ainvoke(kwg)
    output = parser.invoke(raw_output)
    return output


async def llm_concurrent_requests(inputs, prompt_template, engine, parser):
    tasks = []
    # 5 to test
    for record in inputs[:5]:
        tasks.append(llm_call(
            prompt_template=prompt_template,
            engine=engine,
            parser=parser,
            kwg=record,
        ))
    results = await asyncio.gather(*tasks)
    return results


async def main(inputs):
    engine = ChatOpenAI(**CONFIG)
    prompt_template = ChatPromptTemplate.from_messages([("user", PROMPT)])
    parser = JsonOutputParser(pydantic_object=Response)
    responses = await llm_concurrent_requests(
        inputs=inputs, prompt_template=prompt_template, engine=engine, parser=parser
    )


if __name__ == "__main__":
    # ....
    asyncio.run(main(inputs))
```
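
For reference, here is a minimal sketch of the same fan-out that drives vLLM's OpenAI-compatible endpoint with `openai.AsyncOpenAI` directly; the `base_url`, `api_key`, and model name below are placeholders, not my actual config. If this version also sends one request at a time, the problem is not in the LangChain layer.

```python
import asyncio

from openai import AsyncOpenAI

# Placeholder endpoint: vLLM's OpenAI-compatible server defaults to port 8000.
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")


async def call(prompt: str) -> str:
    # Each await suspends this coroutine, so all five calls can be in flight at once.
    resp = await client.chat.completions.create(
        model="placeholder-model",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


async def main() -> None:
    prompts = [f"Count to {i}" for i in range(1, 6)]
    # Five concurrent requests on a single thread; vLLM can batch these together.
    print(await asyncio.gather(*(call(p) for p in prompts)))


asyncio.run(main())
```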
