Post History

Current version by Nick Antonaccio

Current Version: May 09, 2026 at 14:06

Using LM Studio on the GX10 has really nudged me to use the LM Link feature, which allows the GX10 to be accessed as a secure API server from any remote machine that also has LM Studio installed (without having to forward any ports on your router). LM Link lets me use all the models on my GX10, with inference processed by the GX10's GPU, and results delivered on any other machine that has LM Studio installed.

To be clear: when you run Pi, for example, on several remote machines, each of those Pi instances connects to the API server of the LM Studio instance installed on its own machine. The model list shown in each of those machines' LM Studio instances (connected to your LM Link account) includes all the models on the GX10 - those models appear as if they were installed directly in the local instance of LM Studio. So Pi connects to the API served by the local LM Studio instance, and the inference runs on the GX10. That's pretty slick.
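Since LM Studio's local server speaks an OpenAI-compatible HTTP API (on port 1234 by default), any client on a machine in this setup can talk to its own local LM Studio instance the same way, regardless of where inference actually runs. A minimal sketch of such a client is below - the base URL and model name are assumptions; use whatever appears in your local model list, including models shared over LM Link:

```python
import json
import urllib.request

# LM Studio's local server exposes an OpenAI-compatible API;
# localhost:1234 is its default address (adjust if you changed it).
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(model, prompt):
    """Build the JSON payload for a /v1/chat/completions call."""
    return {
        "model": model,  # placeholder name - pick one from your model list
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def chat(model, prompt):
    """Send the prompt to the local LM Studio server. With LM Link,
    inference may actually run on a remote machine such as the GX10,
    even though this call only ever targets localhost."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The point of the design is that the client never needs to know where the model lives: it always talks to localhost, and LM Link handles routing the request to the machine that actually hosts the model.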

UPDATE: I've been running LM Link with one of the Strix Halo machines acting as server, and that's also working reliably.
