• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: March 8th, 2024


  • You didn’t, I did. The starting models cap at 24 GB, but you can spec the biggest one up to 64 GB. I should have clicked through to the customization page before reporting what was available.

    That is still cheaper than a 5090, so it’s not that clear cut. I think it depends on what you’re trying to set up and how much money you’re willing to burn. Sometimes literally: the Mac will also be more power efficient than a honker of an Nvidia 90-class card.

    Honestly, all I have for recommendations is that I’d rather scale up than down. I mean, unless you also want to play kickass games at insane framerates with path tracing or something. Then go nuts with your big boy GPUs, who cares.

    But strictly for LLM stuff I’d start by repurposing what I have around, hitting a speed limit, and then scaling up, maybe to something with a lot of shared RAM (including a Mac Mini if you’re into those), and keep rinsing and repeating. I don’t know that I personally am in the market for AI-specific multi-thousand-dollar APUs with a hundred-plus gigs of RAM yet.


  • Thing is, you can trade off speed for quality. For coding support you can settle for Llama 3.2 or a smaller deepseek-r1 and still get most of what you need on a smaller GPU, then scale up to a bigger model that will run slower if you need something cleaner. I’ve had a small laptop with 16 GB of total memory and a 4060 mobile serving as a makeshift home server with an LLM and a few other things and… well, it’s not instant, but I can get the sort of thing you need out of it.

    Sure, if I’m digging in and want something faster I can run something else in my bigger PC GPU, but a lot of the time I don’t have to.

    Like I said below, though, I’m in the process of trying to move that to an Arc A770 with 16 GB of VRAM that I had lying around; I saw it on sale for a couple hundred bucks when I needed a temporary GPU replacement for a smaller PC. I’ve tried running LLMs on it before and it’s not… super fast, but it’ll handle 14B models just fine. That’s going to be your sweet spot on home GPUs anyway; anything larger than 16 GB of VRAM and you’re talking 3090, 4090 or 5090, pretty much exclusively.
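    Why 14B on a 16 GB card works out is just arithmetic on parameter count and quantization. A minimal back-of-the-envelope sketch; the 20% headroom factor for KV cache and runtime overhead is my own rough assumption, not a measurement:

    ```python
    # Rough estimate of the memory a quantized model's weights need.
    # Rule of thumb only; real usage adds KV cache and runtime overhead.

    def weights_gb(params_billions: float, bits_per_weight: int) -> float:
        """Approximate weight footprint in GB at a given quantization."""
        bytes_total = params_billions * 1e9 * bits_per_weight / 8
        return bytes_total / 1e9

    def fits(params_billions: float, bits_per_weight: int, vram_gb: float) -> bool:
        """Check against a card's VRAM, leaving ~20% headroom (assumed)."""
        return weights_gb(params_billions, bits_per_weight) * 1.2 <= vram_gb

    # A 14B model at 4-bit quantization is roughly 7 GB of weights,
    # which is why it sits comfortably on a 16 GB card like the A770.
    print(round(weights_gb(14, 4), 1))   # 7.0
    print(fits(14, 4, 16))               # True
    print(fits(14, 16, 16))              # False: fp16 weights alone are ~28 GB
    ```

    Same math explains the 3090/4090/5090 cutoff: anything much past 20B at reasonable quantization blows past 16 GB.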


  • This is… mostly right, but I have to say, Macs with 16 gigs of shared memory aren’t all that; you can get many other alternatives with similar memory configurations, although not as fast.

    A bunch of vendors are starting to lean into this by pairing small, weaker PCs with a BIG pool of shared RAM. That new Framework desktop with an AMD APU specs up to 128 GB of shared memory, while the Mac Minis everybody is hyping up for this cap out at 24 GB instead.

    I’d strongly recommend starting with a mid-sized GPU on a desktop PC. Intel ships the A770 with 16 GB of VRAM and the B580 with 12, and they’re both dirt cheap. You can still get a 3060 with 12 GB for similar prices, too. I’m not sure how they benchmark relative to each other on LLM tasks, but one can look it up. Cheap as the entry-level Mac Mini is, all of those are cheaper if you already have a PC up and running, and the total amount of dedicated RAM you get is very comparable.




  • Yes. They are wrong. Which is why I won’t agree with them just because the criminal in question is one of theirs.

    And yes, I’m worried: fascists have a history of weaponizing institutions against their enemies when they control them, presenting themselves as victims, and eroding those same institutions when democratic processes hold them accountable for criminal behavior.

    It’s why, while I don’t question the criminal outcome, I would have politically preferred for them to lose support electorally before this happened. They are likely to try to capitalize on presenting this as persecution and, looking at historical comparables, they are likely to succeed. It’s not like Marine is so charismatic that her absence decapitates the movement by default. She’s no Trump.

    Still, I’m not objecting to her being barred from office or jailed. I just hope the rest of the French political spectrum has a plan to manage the fallout, because so far they haven’t even been able to manage the fallout of not losing an election, so from the outside looking in my level of trust is low, and Europe can’t afford to have France spiral down into fascism as well.


  • It’s your prerogative, but I will clarify the point.

    For one thing, her “not reward” is not a “not reward”; it is an actual punishment, codified in the criminal code of many democratic countries, where the penalty is the removal of the right to participate in elections or hold public office. This is a right all citizens have, and it is removed for a period of time as punishment for a crime. It is a literal punishment. You are factually wrong.

    Second, naming fallacies doesn’t mean they happened. I did not bring anybody else into this conversation, so no whataboutism; I did not misquote or rephrase your argument, so no strawman; and the fact that I pointed out an inconsistency in your point doesn’t mean I “distorted” it.

    And finally, I am not primed to “defend scum like her”. I have not, in fact, defended her at any point. She’s been found guilty of a crime, which makes her a criminal. What I am not is a demagogue willing to argue that harsher penalties, and specifically harsher penalties for people I don’t like, are the correct solution when every piece of serious research and information I have says they’re not. If it doesn’t help when the US does it to poor people for racist reasons it doesn’t help when aimed at politicians. Criminal penalties must be dissuasive, but that bar is pretty low and there is no proof that harsher penalties lead to more compliance.






  • This. People NEED to stop anthropomorphising chatbots. Both to hype them up and to criticise them.

    I mean, I’d argue that you’re even assigning a feedback loop that probably doesn’t exist by seeing this as a seed for future training. Most likely all of these responses are at most hallucinations based on the millions of bullshit tweets people make about the guy and his typical behavior, and nothing else.

    But fundamentally, if a reporter reports on a factual claim made by an AI on how it’s put together or trained, that reporter is most likely not a credible source of info about this tech.

    Importantly, that’s not the same as a savvy reporter probing an AI to see which questions it’s been hardcoded to avoid or to respond to in a certain way. You can definitely identify guardrails by testing a chatbot. And I realize most people can’t tell the difference between the two types of reporting, which is part of the problem… but there is one.


  • To be fair, the security concerns they are referencing aren’t about the model itself, but about their self-hosted version used via some mobile or web app interface. Which is definitely collecting your data, just like the US-based equivalents are.

    Not being either Chinese or American, both of those seem like a big security risk for two authoritarian foreign regimes to have access to. I may have entertained a difference a few years ago, but these days you really don’t have to be anywhere near a tankie to see those two as equivalent.

    If you’re going to run an LLM for something, do it locally.
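    For what it’s worth, running locally doesn’t mean giving up the familiar tooling: local runners like Ollama expose an OpenAI-compatible HTTP endpoint, so “do it locally” mostly means pointing your client at localhost instead of a vendor’s cloud. A minimal sketch of the request shape; the port and model name are assumptions about your particular setup, and nothing here leaves your machine:

    ```python
    import json

    # Request body for a locally hosted, OpenAI-compatible chat endpoint.
    # Ollama, for example, serves one at http://localhost:11434/v1/chat/completions.
    # "llama3.2" is just an example of a model you might have pulled locally.
    payload = {
        "model": "llama3.2",
        "messages": [
            {"role": "user", "content": "Explain mutexes in one paragraph."}
        ],
        "stream": False,
    }

    # You'd POST this with any HTTP client; printing it just shows the shape.
    print(json.dumps(payload, indent=2))
    ```

    The nice side effect is that most OpenAI-client libraries work unchanged once you override the base URL.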


  • Here’s the fun part about this leak and why the US should be treated as a foreign adversary for the foreseeable future:

    Trump is apparently softballing this. He’s definitely the easiest to manipulate, least ideologically aggressive one in this circle. In his first term he was the idiot in the room while the adults tried to manage him. Now he’s the idiot in the room while the rest of the room is aggressive fascists trying to create a new axis.

    Even if Trump had a heart attack tomorrow the US would remain part of a new fascist entente. This requires fundamental realignments across the world.

    Note that this is true whether this is a legitimate screwup or some weird deliberate propaganda action.