• 2 Posts
  • 223 Comments
Joined 7 months ago
Cake day: March 22nd, 2024

  • According to an anonymous European diplomat, if Democratic candidate Kamala Harris wins the US presidential election, it can be assumed that Joe Biden will start working on an invitation to Ukraine during the transition period.

    The implication though… what if she doesn’t?

    I guess they can’t invite Ukraine in the transition period if Trump would shoot it down? But why can’t they just rush the ratification?



  • has so far only been used to train Musk’s own Grok AI

    The Grok models are a laughing stock in the LLM space. They aren’t good over APIs, and they’re even less useful when Elon “open sources” them long after release. Qwen 72B, and heck, even Qwen 32B is already better than Grok 2, which is probably hundreds of billions of parameters. Qwen is runnable locally right now, Apache 2.0, and released day one. Grok 1 is… well, I dunno, no one has even bothered to host it for anything.

    I dunno what Twitter is doing with all those H100s Elon hoarded, but it seems like a big waste so far. It’s certainly nothing to help the open source/self-hosting space, or to “decensor” and “democratize” LLMs like Elon fans seem to think.






  • Basically the only thing that matters for LLM hosting is VRAM capacity. Hence AMD GPUs can be OK for LLM running, especially if a used 3090/P40 isn’t an option for you. It works fine, and the 7900/6700 are like the only sanely priced 24GB/16GB cards out there.

    I have a 3090, and it’s still a giant pain with Wayland, so much so that I use my AMD iGPU for display output, and Nvidia still somehow breaks things. Hence I just do all my gaming in Windows TBH.

    CPU doesn’t matter for LLM running, so cheap out with a 12600K, 5600, 5700X3D or whatever. And the single-CCD X3D chips are still king for gaming AFAIK.
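    The “VRAM capacity is what matters” point above can be sketched with a back-of-the-envelope estimate: weights dominate memory use and scale linearly with parameter count and quantization bit-width. This is a rough rule of thumb I’m assuming here (the function name and the flat overhead allowance for KV cache are made up for illustration), not exact figures for any specific model or runtime.

    ```python
    # Rough VRAM estimate for hosting an LLM locally.
    # Assumption: memory ~= weights (params * bits / 8) plus a flat
    # allowance for KV cache and runtime buffers. Real usage varies
    # with context length, backend, and quantization scheme.

    def estimate_vram_gb(params_billions: float,
                         bits_per_weight: float = 4.0,
                         overhead_gb: float = 2.0) -> float:
        """Approximate VRAM requirement in GB.

        params_billions: model size, e.g. 32 for a 32B model
        bits_per_weight: 16 for fp16, ~4-5 for common local quants
        overhead_gb:     rough allowance for KV cache and buffers
        """
        weight_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
        return weight_gb + overhead_gb

    # A 32B model at 4-bit quant needs roughly 18 GB -> fits a 24 GB
    # card like a 3090 or 7900 XTX; the same model at fp16 needs
    # roughly 66 GB and won't fit on any single consumer GPU.
    print(round(estimate_vram_gb(32, 4.0), 1))   # ~18.0
    print(round(estimate_vram_gb(32, 16.0), 1))  # ~66.0
    ```

    This is also why a used 24GB 3090/P40 or a 24GB/16GB AMD card is the interesting spec line, and why the CPU barely figures into it.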