• Avid Amoeba@lemmy.ca · 9 months ago

    Why do I feel it isn’t going to be a repeat of the standards-driven, co-operative development supported by open-source software infrastructure that occurred during the decade and a half after the dotcom bubble? I have a feeling it will resemble the pre-mass-computing world of AT&T, GE and IBM.

    • andyburke@fedia.io · 9 months ago

      There are a lot of open source LLMs being developed, ones you can run at home on your own data.

        • LainTrain@lemmy.dbzer0.com · 9 months ago

          What would be the threshold for them to “take off”? It’s all already out there, so aren’t we already there, no?

          • umbrella@lemmy.ml · 9 months ago

            It’s been a while, but last I tried it, it wasn’t as good as the proprietary models.

              • umbrella@lemmy.ml · 9 months ago (edited)

                I tried the Llama model for text, and another one meant for images; I can’t quite remember the name, but it was one of the main ones.

                Are they any good now? Running an LLM locally actually sounds mildly useful.

                  • LainTrain@lemmy.dbzer0.com · 9 months ago

                    Honestly, I think speed is something I don’t care too much about with models, because even things like ChatGPT will be slower than Google for most things, and if something is more complex and a good use case for an LLM, speed is unlikely to be the primary bottleneck.

                    My gf’s private chatbot right now is a combination of Mistral 7B with a custom finetune, and it directs some queries to ChatGPT if I ask (I got free tokens way back; might as well burn through them).
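                    A setup like that could be sketched roughly as below. Everything here is a hypothetical illustration: `make_router`, the `!gpt ` trigger prefix, and the stub backends are invented for the example; in practice the local callable might wrap a Mistral 7B finetune (e.g. via llama-cpp-python) and the remote one the OpenAI API.

```python
# Hypothetical sketch: a chat function that answers with a local model by
# default and only routes to a remote hosted model when explicitly asked.

def make_router(local_model, remote_model, force_remote_prefix="!gpt "):
    """Return a chat function wrapping two model backends.

    local_model / remote_model are any callables taking a prompt string
    and returning a reply string (stubs below stand in for real inference).
    """
    def chat(prompt: str) -> str:
        if prompt.startswith(force_remote_prefix):
            # Explicit opt-in: strip the trigger and use the remote API.
            return remote_model(prompt[len(force_remote_prefix):])
        # Default path: keep the query on the local model.
        return local_model(prompt)
    return chat

# Stub backends standing in for real local/remote inference calls.
local_model = lambda p: f"[local] {p}"
remote_model = lambda p: f"[remote] {p}"

chat = make_router(local_model, remote_model)
print(chat("hello"))               # handled by the local model
print(chat("!gpt hard question"))  # explicitly routed to the remote API
```

                    The point of keeping the remote path opt-in (rather than automatic) is that everything stays local unless you deliberately spend the hosted tokens.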

                    How much of an improvement is Mixtral over Mistral in practice?