Local LLM setup

#1
I bought a zettlabs D6 AI NAS.
What I want to do with it:
1/ Have files on the NAS consisting of about 50 PDFs, roughly 76 MB in total
2/ Write a prompt in Anything LLM that uses only these PDFs to generate its answer

The instructions I have been given describe a setup in the Docker app that, when finished, gives me a URL to use for this task.

This has never worked
Happy to pay someone for what could be a quick job via TeamViewer.
#2
In your case, the issue is almost certainly with the Docker volume mapping or the model path in the container. Anything LLM needs to be pointed to both the local folder containing your PDFs (for ingestion) and the model directory (for running the LLM). If either of those is wrong, you’ll get the URL but no usable response.

Here’s what usually works:
1. In Docker, when setting up the Anything LLM container, make sure your PDFs are in a NAS folder like /share/AI_docs, then map that folder to /app/docs inside the container (see the example command after this list).
2. In the Anything LLM web interface, go to the “Sources” tab and select /app/docs as your data directory.
3. Make sure your model (e.g., mistral, llama3, or similar) is downloaded and referenced correctly in the container or environment variables.
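For reference, a run command following the paths above might look roughly like this. Treat it as a sketch: the image name mintplexlabs/anythingllm and port 3001 come from the standard Anything LLM Docker image, while /share/AI_docs and /app/docs are just the example paths from step 1, so adjust them to match your NAS and your own instructions.

docker run -d \
  --name anythingllm \
  -p 3001:3001 \
  -v /share/AI_docs:/app/docs \
  mintplexlabs/anythingllm

Once the container is running, the web interface should be reachable at http://YOUR-NAS-IP:3001, and that is the URL your instructions were pointing you to.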

If the container still doesn’t respond after setup, check the container logs in Docker (see the command below); they will usually show whether the model failed to load or the documents weren’t indexed.
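A quick way to view those logs from the NAS terminal, assuming the container is named anythingllm as in the example above:

docker logs -f anythingllm

Any model-loading or document-indexing errors will show up in that output as the container starts and ingests your PDFs.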

