LM Studio Experiments

I have been experimenting with LM Studio over the last few days. It’s a tool for running large language models locally on my computer, which makes it ideal for when I’m travelling or spending time somewhere with a slow or unreliable internet connection. It’s cross-platform (Windows, Apple Silicon Macs and Linux), and has so far worked on every machine I’ve tried it on, although not always with the default settings.

There are a number of different runtimes and language models to choose from, with the smallest models running on a laptop with 8 GB of RAM and the largest requiring a very powerful computer to get the best out of them. I’ve tried them on my work and home computers for the sort of things I use Copilot and ChatGPT for, and the experience is comparable, especially when using some of the larger models.
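LM Studio can also expose a local server with an OpenAI-compatible API (on port 1234 by default), which is handy for scripting against whichever model is loaded. As a rough sketch, assuming that server is running and the standard openai Python client is installed, a chat request looks something like this; the model name below is a placeholder for whatever you happen to have loaded:

```python
# Minimal sketch: chat with a model served by LM Studio's local
# OpenAI-compatible server. Assumes the server is running on the
# default port (1234) and a model is already loaded in the app.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
    api_key="lm-studio",  # any non-empty string; no real key is needed locally
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an AppImage is in one sentence."},
    ],
)

print(response.choices[0].message.content)
```

Because the API shape matches OpenAI’s, existing scripts can usually be pointed at the local server just by changing the base URL.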

Ideally the software wants a decent graphics card and plenty of RAM. The latter I can provide, but on my desktop computer it initially failed to work because it was trying to use my old and slow graphics card, which is nowhere near good enough for this kind of tool. The fix was simply to tell it to use a different runtime (CPU llama.cpp), after which everything worked as expected.
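After switching runtimes, a quick sanity check is to ask the local server which models it can see; if the request in this sketch returns a list, the new runtime is up and serving. Again, this assumes the server is enabled and listening on the default port:

```python
# Sanity check: list the models LM Studio's local server knows about.
# Assumes the server is running on the default port (1234).
import requests

resp = requests.get("http://localhost:1234/v1/models", timeout=5)
resp.raise_for_status()

for model in resp.json().get("data", []):
    print(model["id"])
```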

It’s also worth mentioning that the Linux version ships as an AppImage, which means it needs FUSE v2 to run. Newer distros (Ubuntu 24.04 onwards, for example) no longer include it by default, so you’ll need to install libfuse2t64 (sudo apt install libfuse2t64 on Debian-based systems) to get it to run in the first place, but after that everything should just work.

In a world where AI is the future but privacy still matters, I think tools like this are definitely worth spending some time with.