Guide
Do read this if you're not well versed with LLMs (it's not that tough, but it is important).
- Download and install LLocal.
- If you haven't installed Ollama, let LLocal guide you through it. Follow each prompted step, and LLocal will download the installer for you. This takes about two minutes.
- Now that LLocal and Ollama are both set up, you can start downloading models. This is done through the Pull a model section in Settings, which can be accessed through the command centre of the sidebar. (A sketch of the underlying Ollama call is included after these steps.)
- There are some recommendations below the Pull a model section, but a general consumer should have slightly more knowledge about which model is right for them. If you are not interested and just want to get going, pull phi3 or gemma:2b.
- The model you run locally depends a lot more on your machine than on your choice. If you have a dedicated graphics card with 6-8GB of VRAM, you can make use of models around 7-8B parameters; if your machine has no VRAM, or less than 6GB, I would highly recommend running models with 3.5B parameters or fewer (like tinyllama, gemma:2b, phi3, etc.). A rough sizing rule of thumb is sketched at the end of this guide.
- You can find out about the supported models via Ollama.ai
- Voila! You're all set now!
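
If you're curious what happens when you pull a model, the sketch below makes the same kind of request directly against the Ollama HTTP API, assuming Ollama's default local endpoint (http://localhost:11434) and its /api/pull route. LLocal handles all of this for you through the UI, so this is purely illustrative.

```ts
// Illustrative sketch: pull a model from a locally running Ollama server.
// Assumes the default endpoint http://localhost:11434 and the /api/pull route.
async function pullModel(model: string): Promise<void> {
  const res = await fetch("http://localhost:11434/api/pull", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Older Ollama builds expect { name: model } instead of { model }.
    body: JSON.stringify({ model }),
  });
  if (!res.ok || !res.body) throw new Error(`Pull failed: ${res.status}`);

  // Ollama streams newline-delimited JSON status updates as it downloads.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (line.trim()) console.log(JSON.parse(line).status);
    }
  }
}

pullModel("phi3").catch(console.error);
```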
For more information regarding supported models you can check out: ollama.ai/library
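
If you want a rough idea of whether a model will fit on your machine, a back-of-the-envelope estimate is parameters times bytes per weight, plus a little overhead for context. The sketch below assumes the roughly 4-bit quantization most Ollama models ship with; treat the numbers as ballpark figures, not exact requirements.

```ts
// Back-of-the-envelope memory estimate: parameters x bytes-per-weight,
// plus a little overhead for the context/KV cache. These are assumptions,
// not exact figures; most Ollama models are ~4-bit quantized by default.
function estimateModelGb(paramsBillion: number, bitsPerWeight = 4, overheadGb = 1): number {
  return paramsBillion * (bitsPerWeight / 8) + overheadGb;
}

console.log(estimateModelGb(8));   // ~5 GB: 7-8B models suit a 6-8GB GPU
console.log(estimateModelGb(3.8)); // ~2.9 GB: e.g. phi3 on a modest machine
console.log(estimateModelGb(2));   // ~2 GB: e.g. gemma:2b, fine without a GPU
```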