This package contains the LangChain integration with DigitalOcean.

## Installation

```bash
pip install -U langchain-gradient
```
## Credentials

Configure your credentials by setting the `DIGITALOCEAN_INFERENCE_KEY` environment variable:

- Log in to the DigitalOcean Cloud console.
- Go to the Gradient Platform and navigate to Serverless Inference.
- Click on Create model access key, enter a name, and create the key.
- Use the generated key as your `DIGITALOCEAN_INFERENCE_KEY`.

Create a `.env` file with your access key:

```
DIGITALOCEAN_INFERENCE_KEY=your_access_key_here
```
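The examples below read this file with `python-dotenv`'s `load_dotenv()`. As a rough illustration of what that call does, here is a minimal sketch; `load_env_file` is a hypothetical stand-in for this README, not part of `langchain-gradient` or `python-dotenv`:

```python
import os


def load_env_file(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv():
    read KEY=VALUE lines from `path` into os.environ,
    without overwriting variables that are already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and lines without a '='.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

In practice, prefer the real `load_dotenv()`, which also handles quoting, `export` prefixes, and variable expansion.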
## Chat Models

The `ChatGradient` class exposes chat models from `langchain-gradient`:
```python
import os

from dotenv import load_dotenv

from langchain_gradient import ChatGradient

load_dotenv()

llm = ChatGradient(
    model="llama3.3-70b-instruct",
    api_key=os.getenv("DIGITALOCEAN_INFERENCE_KEY"),
)

result = llm.invoke("What is the capital of France?")
print(result)
```
`ChatGradient` also supports streaming responses:

```python
import os

from dotenv import load_dotenv

from langchain_gradient import ChatGradient

load_dotenv()

llm = ChatGradient(
    model="llama3.3-70b-instruct",
    api_key=os.getenv("DIGITALOCEAN_INFERENCE_KEY"),
)

for chunk in llm.stream("Tell me what happened to the Dinosaurs?"):
    print(chunk.content, end="", flush=True)
```
More features coming soon.