• 1 Post
  • 5 Comments
Joined 1 year ago
Cake day: August 28th, 2023

  • Sure, to be pedantic, I could clarify: “I think the fediverse will realistically never gain mainstream adoption without a large organization with either a massive existing userbase or the ability to invest in large organized marketing efforts.”

    In theory this could happen through some fediverse collective that receives a large amount of donations, but I don’t see that as very likely, and even with organized marketing efforts there’s no guarantee of effectively converting them into adoption.


  • Unpopular opinion: Threads deepening ties to the fediverse is actually a really good thing for the fediverse as a whole.

    I feel like realistically the fediverse will never gain mainstream adoption on its own. People like to believe in this beautiful future where the fediverse “wins out” and beats all the major social media networks, but I just don’t see this happening. This is why I think Threads is actually really important for the growth of the fediverse and realistically one of the only paths to broad adoption.

    Beyond this, I also separately really like the idea of being able to use a platform like Threads with my irl friends while still having access to open source clients etc. (i.e. preventing situations like the Twitter API debacle, which fucked over 3rd-party clients).




  • Sorry, but has anyone in this thread actually tried running local LLMs on a CPU? You can easily run a 7B model at varying levels of quantization (e.g. 5-bit quantization) and get a generalized, promptable LLM. Yeah, of course it’s going to take ~4 GB of RAM (which is mem-mapped and paged into memory), but you can easily fine-tune smaller, more specialized models (like the translation one mentioned above) and get surprising intelligence at a fraction of the resources.

    Take, for example, phi-2, which performs as well as 13B-param models with only 2.7B params. Yeah, that’s still going to take ~1.5 GB of RAM, which Firefox couldn’t reasonably ship, but many lighter-weight specialized tasks could easily use something like a fine-tuned 0.3B model with quantization.
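
    The back-of-the-envelope math behind those RAM figures is just parameter count × bits per weight. A minimal sketch (the function name is illustrative, and this counts only the quantized weights, ignoring KV cache and runtime overhead):

    ```python
    # Rough memory estimate for quantized LLM weights.
    # Assumption: memory ≈ params × bits_per_weight / 8 bytes
    # (real quantization formats add small per-block overhead).

    def quantized_weight_gib(params: float, bits_per_weight: float) -> float:
        """Approximate weight memory in GiB for a quantized model."""
        return params * bits_per_weight / 8 / (1024 ** 3)

    # 7B model at 5-bit quantization: the ~4 GB figure above.
    seven_b = quantized_weight_gib(7e9, 5)     # ≈ 4.07 GiB

    # phi-2 (2.7B params) at 4-bit: comfortably under 1.5 GB.
    phi2 = quantized_weight_gib(2.7e9, 4)      # ≈ 1.26 GiB
    ```

    The same arithmetic shows why a 0.3B model is plausible inside a browser: even at 8-bit it's only around 0.3 GiB of weights.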