Minisforum AI X1 Pro - Printable Version

+- ASK NC (https://ask.nascompares.com)
+-- Forum: Q&A (https://ask.nascompares.com/forumdisplay.php?fid=1)
+--- Forum: Before you buy Q&A (https://ask.nascompares.com/forumdisplay.php?fid=2)
+--- Thread: Minisforum AI X1 Pro (/showthread.php?tid=11831)
Minisforum AI X1 Pro - Enquiries - 03-20-2025

Before I buy this system, I need to know for sure that other LLMs will run without compatibility issues. Your article says: "Only Features Microsoft Co-Pilot out the box – Hard/impossible to easily switch to ChatGPT or DeepSeek etc". If you do not know about this, please give me a contact to find out for sure... thank you.

RE: Minisforum AI X1 Pro - jeremyclark - 03-20-2025

Great question! I totally get wanting to nail down compatibility before dropping cash on a system like the Minisforum AI X1 Pro. The line you quoted, about it only featuring Microsoft Copilot out of the box and being hard to switch to other LLMs like ChatGPT or DeepSeek, seems to come from a review or article, right? I'll break this down based on what I know about the X1 Pro and how LLMs typically work on hardware like this.

First off, the "Microsoft Copilot out of the box" bit just means it ships preconfigured with Windows 11 and has Copilot integrated (it even has a dedicated Copilot key). That's a marketing point tied to the AMD Ryzen AI 9 HX 370 processor and Microsoft's push for AI PCs. It doesn't mean the system is locked to Copilot or can't run other LLMs; it's simply what's ready to go when you first boot it up.

Now, about running other large language models like ChatGPT or DeepSeek: the X1 Pro is a beefy little machine (up to 96GB RAM, NVMe SSDs, and that Ryzen AI chip with an NPU). Hardware-wise, it's more than capable of handling local LLMs, including open-source ones like DeepSeek-R1 distilled models (e.g., the 7B or 14B parameter versions). The catch isn't the hardware; it's the software setup and how you want to use the models. Here's the deal:
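As a rough back-of-the-envelope illustration of why those model sizes fit comfortably in the X1 Pro's RAM ceiling, here is a sketch of the usual weights-size estimate for quantized local models (the 20% overhead factor for KV cache and runtime is an assumption, not a measured value):

```python
def est_ram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough RAM estimate (GB) for running a quantized LLM locally.

    params_b: parameter count in billions (e.g. 7 for a 7B model).
    bits_per_weight: quantization level (4-bit is common for local use).
    overhead: assumed multiplier for KV cache and runtime buffers (~20%).
    """
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# Typical local-model sizes at 4-bit quantization:
for size in (7, 14, 32):
    print(f"{size}B @ 4-bit ≈ {est_ram_gb(size, 4):.1f} GB")
# → 7B ≈ 4.2 GB, 14B ≈ 8.4 GB, 32B ≈ 19.2 GB
```

Even a 32B model at 4-bit lands well under the X1 Pro's 96GB maximum, which is why the "can it run other LLMs" question comes down to software rather than capacity.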
RE: Minisforum AI X1 Pro - ed - 03-21-2025

Hi, thanks for your message! To run LLMs like DeepSeek locally at comfortable speeds, a decent dedicated GPU (e.g. an RTX 3060 or better) helps a great deal, since inference is mostly memory-bound; smaller quantized models (roughly 7B-14B) will also run on a capable CPU or integrated GPU with enough RAM, just more slowly. Note that ChatGPT itself is a cloud service and can't be run locally; open-weight models accessed through similar chat interfaces can be.
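For the local route discussed above, a common setup is Ollama, which exposes a local HTTP API. A minimal sketch using only the Python standard library, assuming Ollama is installed and running on its default port 11434 and that a model such as `deepseek-r1:7b` has already been pulled (`ollama pull deepseek-r1:7b`):

```python
import json
import urllib.request

# Ollama's default local endpoint (assumed default install).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a single non-streaming generation request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send one prompt to the local Ollama server and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama service with the model pulled):
#   reply = ask("deepseek-r1:7b", "Summarise the Minisforum AI X1 Pro in one line.")
```

Nothing here depends on Copilot or Windows-specific tooling, which is the practical answer to the original question: the preinstalled Copilot integration doesn't block running other models.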