Local LLM set up

#2
In your case, the issue is almost certainly the Docker volume mapping or the model path inside the container. Anything LLM needs to be pointed at two things: the local folder containing your PDFs (for ingestion) and the model directory or endpoint (for running the LLM). If either of those is wrong, you'll still reach the web UI at the URL, but you won't get any usable responses.

Here’s what usually works:
1. When creating the Anything LLM container in Docker, make sure your PDFs live in a NAS share such as /share/AI_docs, and map that folder to /app/docs inside the container (see the example run command after this list).
2. In the Anything LLM web interface, go to the “Sources” tab and select /app/docs as your data directory.
3. Make sure your model (e.g., mistral, llama3, or similar) is downloaded and referenced correctly in the container or its environment variables.
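
As a rough sketch, a run command covering steps 1 and 3 might look like the one below. The image name, port, and environment variable names (LLM_PROVIDER, OLLAMA_BASE_PATH) are assumptions based on a typical Anything LLM + Ollama setup, and the IP address is a placeholder; check the documentation for your image version for the exact names it expects.

    # Sketch only: map the NAS share into the container and point it at a model backend.
    # Image name, env var names, and the Ollama address are assumptions; verify them
    # against your own setup before running.
    docker run -d --name anythingllm \
      -p 3001:3001 \
      -v /share/AI_docs:/app/docs \
      -e LLM_PROVIDER=ollama \
      -e OLLAMA_BASE_PATH=http://192.168.1.50:11434 \
      mintplexlabs/anythingllm

After that, the web UI should be reachable on port 3001 of the NAS, and /app/docs inside the container will show whatever is in /share/AI_docs.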

If the container still doesn't respond after setup, check the Docker logs; they will usually show whether the model failed to load or the documents weren't indexed.
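
For example, assuming the container name from the sketch above:

    # Follow the container's log output live; watch for model-load or indexing errors.
    docker logs -f anythingllm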