Can't run vLLM using https://github.com/vllm-project/vllm/blob/main/benchmarks/README.md #15464
SSWAMIN1SSS announced in General
- Installed vLLM from source: https://github.com/vllm-project/vllm/blob/main/README.md#getting-started
- Tried running the offline throughput benchmark by following https://github.com/vllm-project/vllm/blob/main/benchmarks/README.md#example---offline-throughput-benchmark (a minimal repro sketch follows).
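For context, the traceback below bottoms out in the `LLM` constructor, so the failure should reproduce without the benchmark script at all. This is a minimal sketch, not the exact README command; the model name is an arbitrary small placeholder, not one from my run:

```python
# Minimal repro sketch: benchmark_throughput.py ultimately just builds an
# LLM from the engine args, so the same device-inference failure should
# appear with a bare constructor call.
# The model name is an arbitrary small example (assumption), not from the report.
from vllm import LLM

llm = LLM(model="facebook/opt-125m")
```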
Even after installing `datasets`, I was getting:
```
Traceback (most recent call last):
  File "vllm/benchmarks/benchmark_throughput.py", line 608, in <module>
    main(args)
  File "vllm/benchmarks/benchmark_throughput.py", line 362, in main
    elapsed_time, request_outputs = run_vllm(
  File "vllm/benchmarks/benchmark_throughput.py", line 39, in run_vllm
    llm = LLM(**dataclasses.asdict(engine_args))
  File ".local/lib/python3.10/site-packages/vllm/utils.py", line 1031, in inner
    return fn(*args, **kwargs)
  File ".local/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 242, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File ".local/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 513, in from_engine_args
    vllm_config = engine_args.create_engine_config(usage_context)
  File ".local/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1205, in create_engine_config
    device_config = DeviceConfig(device=self.device)
  File ".local/lib/python3.10/site-packages/vllm/config.py", line 1798, in __init__
    raise RuntimeError("Failed to infer device type")
RuntimeError: Failed to infer device type
```
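The error is raised from `DeviceConfig` when vLLM cannot detect a supported platform. As a hedged diagnostic sketch (not part of the original report), checking what PyTorch itself can see may narrow this down, since a CPU-only torch wheel on a machine expected to use CUDA commonly produces this symptom:

```python
# Hedged diagnostic sketch: report which accelerator, if any, PyTorch can
# detect. If CUDA is unavailable here, vLLM's device inference will also
# fail to find a platform.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device name:", torch.cuda.get_device_name(0))
```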
Can anyone help me resolve this?