I recently purchased a new M4 MacBook Pro and was keen to see how well it performs. What better way than trying to run a local AI language model? While there are different ways to deploy LLMs, this post focuses on the quick and easy way and aims to compare it to what is available in the cloud. Why would you want to run a local LLM, you may ask? Firstly, because it can run offline and none of your prompt details are sent to a third party (i.e. OpenAI, Meta, Google, etc.), and secondly, because you can :)
For this post I am using LM Studio (https://lmstudio.ai/), which is extremely simple to download and install (it uses approximately 500MB of space). Once installed, you are prompted to choose a model, with the default being OpenAI's gpt-oss-20b (~12GB) - which you can read more about here.
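Beyond the chat window, LM Studio can also serve a loaded model over an OpenAI-compatible HTTP API on your machine, which is handy if you want to script against it. The sketch below is a minimal example assuming the local server is running on LM Studio's default port (1234) and that the model identifier matches what your instance reports - both of those are assumptions you should check in the app before running it.

```python
import json
from urllib import request

# Assumed default endpoint for LM Studio's local OpenAI-compatible server;
# check the app's server/developer view for the actual address and port.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="openai/gpt-oss-20b"):
    """Build an OpenAI-style chat completion payload.

    The model name is an assumption - use whatever identifier
    LM Studio shows for the model you have loaded.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llm(prompt):
    """Send the prompt to the locally running model and return its reply.

    Requires LM Studio's local server to be started first.
    """
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(
        LMSTUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: first choice, message content.
    return body["choices"][0]["message"]["content"]

# Example (only works while the LM Studio server is running):
# print(ask_local_llm("Why would I run an LLM locally?"))
```

Because the API mirrors OpenAI's, most existing OpenAI client libraries can also be pointed at the local endpoint by overriding the base URL, so prompts stay on your machine.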