• @NotMyOldRedditName@lemmy.world
      1 year ago

      Check this out

      https://github.com/oobabooga/text-generation-webui

      It has a one-click installer and can use llama.cpp as a backend.

      From there you can download models and try things out.

      If you don’t have a really good graphics card, maybe start with 7B models. Then you can try 13B and compare performance and results.

      Llama.cpp will spread the load across the CPU and as much GPU as you have available (controlled by the number of layers you offload to the GPU, which you can set with a slider in the web UI).
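
      If you end up running llama.cpp directly from the command line instead of through the web UI, the same layer-offload idea is exposed as a flag. A rough sketch (the model path and layer count are just placeholders, adjust for your hardware):

      ```shell
      # Offload 20 transformer layers to the GPU, keep the rest on the CPU.
      # More layers = more VRAM used; if you run out, lower the number.
      ./main -m ./models/llama-2-7b.Q4_K_M.gguf \
        --n-gpu-layers 20 \
        -p "Hello, how are you?"
      ```

      Setting `--n-gpu-layers 0` keeps everything on the CPU, which is slower but works on machines without a usable GPU.
      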