
Documentation on Ollama AI/ML integration? #5802

@ihaddy


Describe feature

Hey guys, I was just wondering if there's any documentation on the AI/ML Ollama integration? I personally dislike Ollama and have a much better time with LM Studio, but I'm not expecting first-class support for it just because I prefer it. So I attempted to use https://github.com/Embedded-Nature/ollama-proxy to solve my problem.

However, I see there are a bunch of routes being hit by TriliumNext that aren't handled by the proxy. I could extend the proxy, but I was wondering if there's any documentation first.
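For context, LM Studio exposes an OpenAI-compatible API rather than Ollama's native one, so the proxy's real job is translating each Ollama route into something like the request below. This is just a sketch: port 1234 is LM Studio's default, the model name is a placeholder, and I'm showing the generic OpenAI chat shape, not necessarily what ollama-proxy literally emits.

```
# Hypothetical target request: LM Studio's OpenAI-compatible chat endpoint
# (1234 is LM Studio's default port; the model name is a placeholder)
curl -X POST "http://localhost:1234/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "phi-4",
    "messages": [{"role": "user", "content": "write me a short story about a robot."}],
    "stream": false
  }'
```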

Basically, requests like this work with the ollama-proxy:

```
curl -X POST "http://localhost:11434/api/generate" \
  -H "Content-Type: application/json" \
  -d '{"model": "phi4:3.8b", "prompt": "write me a short story about a robot.", "stream": false}'

{"model":"phi4:3.8b","created_at":1749599394,"response":" The wording of the story should include three themes, which are freedom, friendship and love. Your response should be enclosed in \\boxed{}. In addition, the story must contain exactly 150 words.\n\n- Start your response with the word \"Sophisticate.\"\n- Include at least one direct speech sentence.\n- Use a vocabulary level suitable for readers aged 13+ year
```

This makes it to LM Studio no problem, but when I try the Trilium chat, the proxy receives a bunch of requests on:

/api/embeddings
/api/chat
/api/show
/api/tags

and responds 404 to each of them. So I'm just confused about how TriliumNext behaves with this Ollama integration, and I'd like to see if I can code something up to make this usable with local LLM backends beyond just Ollama.
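For anyone digging into this, Ollama's published API reference describes those four routes, so a proxy would need to handle roughly the sketches below. I'm assuming TriliumNext sends the documented payload shapes; the exact fields it sets may differ.

```
# Sketches based on Ollama's API reference, not on captured TriliumNext traffic.

# List local models (presumably used to populate Trilium's model picker)
curl "http://localhost:11434/api/tags"

# Show details for one model (older Ollama versions used "name" instead of "model")
curl -X POST "http://localhost:11434/api/show" \
  -H "Content-Type: application/json" \
  -d '{"model": "phi4:3.8b"}'

# Multi-turn chat completion (unlike /api/generate, which takes a single prompt)
curl -X POST "http://localhost:11434/api/chat" \
  -H "Content-Type: application/json" \
  -d '{"model": "phi4:3.8b", "messages": [{"role": "user", "content": "hello"}], "stream": false}'

# Embeddings, returned as {"embedding": [...]} (presumably for Trilium's semantic search)
curl -X POST "http://localhost:11434/api/embeddings" \
  -H "Content-Type: application/json" \
  -d '{"model": "phi4:3.8b", "prompt": "text to embed"}'
```

Mapping-wise, /api/tags roughly corresponds to LM Studio's GET /v1/models, /api/chat to POST /v1/chat/completions, and /api/embeddings to POST /v1/embeddings. /api/show has no OpenAI-compatible equivalent, so the proxy would presumably have to synthesize a response from /v1/models data.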

Additional Information

No response

Metadata

    Labels

    documentation (Improvements or additions to documentation)
    llm
