• FaceDeer@fedia.io · 3 hours ago

    Ironically, this is a great case study to illustrate the value of Chinese models. They’ve released a number that are on par with Claude’s latest models under “open weight” licenses that would allow you to run them yourselves if you wanted to, or to hire some other third party to provide API access. It wouldn’t matter what the original company’s “usage policy” is in that case.

    There are a couple of Western open models that aren’t bad either, but they tend to be aimed at a smaller and simpler use case than Claude.

    • EtAl@lemmy.dbzer0.com · 2 hours ago

      What models exactly? And what kind of hardware do you need to run them? Also, are there any GitHub repos that replicate Claude projects?

    • XLE@piefed.social · 2 hours ago

      Quick, another fix of the LLMs! Let’s not think about what the downtime means for the industry.

  • LordCrom@lemmy.world · 4 hours ago

    This is true for any company using 3rd party services. I worked for one that used a 3rd party messaging service to send out MFA texts to users. The messaging company was hacked and went offline, so we couldn’t send any MFA codes… and of course, they had no plan B.

    In business, always have a backup

    • EndlessNightmare@reddthat.com · 3 hours ago

      This is true for any company using 3rd party services.

      It’s like when a streamer / content-creator gets “deplatformed” from whatever service and they had put all their eggs in that basket.

  • Ludicrous0251@piefed.zip · 6 hours ago

    Either they didn’t pay, they found an exploit, or, more likely, someone at Claude was reviewing their conversations. Take note, any business that cares about IP or confidentiality.

    • PetteriPano@lemmy.world · 5 hours ago

      I’ll bring two theories to the table.

      a) they got caught distilling for their own models
      b) they re-sold their $200/mo plans as APIs

  • ulkesh@piefed.social · 8 hours ago

    Or… taps mic… don’t fucking rely on AI for your business! Play stupid games, win stupid prizes.

    • NotMyOldRedditName@lemmy.world · 7 hours ago

      This has nothing to do with AI.

      Don’t rely on software or workflows or really anything that you can’t easily switch if said company decides to stop doing business with you.

      If you do, it better be a strategic partnership where something like this can’t happen.

      In this case, their workflows should have been AI provider agnostic or had a way to continue functioning if Claude went down.
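      Being AI-provider agnostic can be as simple as hiding every vendor behind one interface with a fallback order. A minimal sketch; the provider names and stub functions here are hypothetical, not any real vendor SDK:

```python
def ask_with_fallback(prompt, providers):
    """Try each (name, call) adapter in order; return the first success."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # outage, ban, quota - treat all as "move on"
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")

# Stub adapters standing in for real vendor clients:
def claude_stub(prompt):
    raise RuntimeError("403: account disabled")

def backup_stub(prompt):
    return "ok: " + prompt

used, answer = ask_with_fallback("hi", [("claude", claude_stub), ("backup", backup_stub)])
```

      If the primary vendor cuts you off, the workflow degrades to the next adapter instead of halting everyone.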

      • ulkesh@piefed.social · 6 hours ago

        This definitely has to do with AI. Because CEOs are losing their stupid minds over it. I agree with you in principle, but let’s not lose sight of the fact that this specific technology is what CEOs are drooling over. Even in my company I had to tell the owner/CEO, “What problem are you trying to solve with AI?” His response was his mouth being open with a dumb look on his face.

        So no business should rely on AI (or, to your point, any software) to the point that it becomes detrimental to their business or workforce should that access be revoked.

        • GamingChairModel@lemmy.world · 5 hours ago

          Yes, this has everything to do with AI, because this is an AI vendor locking out a customer from their ordinary workflow.

          At the same time, this is a generalizable example not limited to AI, where any form of vendor lock-in on a critical business function becomes a potential point of failure when the vendor drops the customer or stops working. It’s true of a cloud provider, an email provider, an ISP, any software provider that can revoke access/authority, or even non-tech vendors like a landlord or a temp agency or an electric utility.

      • traxex@lemmy.dbzer0.com · 5 hours ago

        Vendor lock-in for today’s software is almost impossible to avoid unless you are running on owned bare metal, which is not really an option for many mid-size companies.

        • NotMyOldRedditName@lemmy.world · 4 hours ago

          It can be hard, but it’s not impossible for many things.

          Like if you use AWS S3, there are S3-compatible APIs at Cloudflare and likely other cloud providers.
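          As a sketch of that portability (the endpoint URLs below are illustrative assumptions; check each provider’s docs):

```python
# Keep object storage portable by treating "S3" as an API, not a vendor.
ENDPOINTS = {
    "aws": None,  # boto3's default AWS endpoint
    "r2": "https://<account_id>.r2.cloudflarestorage.com",  # Cloudflare R2
    "minio": "http://localhost:9000",  # self-hosted MinIO
}

def s3_client_kwargs(provider):
    """Build kwargs for boto3.client() so a vendor switch is one config change."""
    kwargs = {"service_name": "s3"}
    if ENDPOINTS[provider] is not None:
        kwargs["endpoint_url"] = ENDPOINTS[provider]
    return kwargs

# usage sketch: boto3.client(**s3_client_kwargs("r2"), aws_access_key_id=..., ...)
```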

          If you’re using a service that offers cloud functions, and one offers the programming language you want to use but others don’t, maybe it’s better to use the more common language that all the platforms offer, even if it’s not your preferred choice.

          If you were using Slack, have a plan to switch quickly to Teams if something goes wrong and Slack drops you, so you can get comms back up quicker.

          For those where alternatives aren’t an option, it should be a very conscious choice with the knowledge it might bite you in the ass with no quick recovery.

          • traxex@lemmy.dbzer0.com · 3 hours ago

            I absolutely agree. Many just don’t think the benefit of being nimble is worth it. Glad to see it being a bigger discussion.

            • NotMyOldRedditName@lemmy.world · 27 minutes ago

              I was (un?)fortunate to work at a company early in my development career that ran into a problem where poor design choices (not mine) limited our ability to be nimble. I’ve been able to carry that lesson on. Not that I’m perfect at it either, though, haha.

              It has worked out to my benefit many times though.

  • one_old_coder@piefed.social · 8 hours ago

    60 employees were dead in the water, as reportedly their daily workflows rely on the AI assistant’s

    Is that a joke? 60 employees do not know how to do their job? This is not Anthropic’s problem.

    • funkless_eck@sh.itjust.works · 2 hours ago

      I throw any bullshit task into AI. I’m supposed to produce a monthly report on my strategic wins and goals for the next month. I throw it in AI, don’t read it, paste it in the Google doc, and send it to the PM, who sends it to my boss, who also doesn’t read it (or uses AI to read it).

      Now, I do know how to write it, but writing this report carefully would take me a day or two, versus 3 minutes with AI.

    • traxex@lemmy.dbzer0.com · 5 hours ago

      Not necessarily, they could have just a piece of the entire build/deploy process that requires some access to Claude to complete and have no real easy way to turn it off. Like multiple CI/CD steps reaching out for validation or something stupid.
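      If an AI check has wormed its way into the pipeline like that, one mitigation is to make the step fail soft. A hypothetical sketch (the gate and verdict format are made up for illustration):

```python
# Hypothetical CI gate: if the AI review service is unreachable or rejects our
# account, downgrade to a warning instead of blocking every deploy.
def ai_review_gate(diff, review_call):
    try:
        verdict = review_call(diff)
        return ("fail", verdict) if verdict.startswith("REJECT") else ("pass", verdict)
    except (ConnectionError, PermissionError) as exc:
        return ("warn", f"AI review skipped ({exc}); human review required")

# Stub standing in for a dead vendor endpoint:
def dead_service(diff):
    raise ConnectionError("service unavailable")

status, detail = ai_review_gate("some diff", dead_service)
```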

    • terabyterex@lemmy.world · 2 hours ago

      This is the current web with its social-media-like life: say something, be outraged.

      But let’s be honest: we really have no idea what the true story is. There are so many ways to spin a story. Probably both sides fucked up.

      The one thing we know is fucked up is that Anthropic is acting like a startup. If they want to work with businesses, they need a dedicated support team.

      I don’t know if it’s because it’s been a long day and I am exhausted, but I am tired of being outraged.

    • Pommes_für_dein_Balg@feddit.org · 7 hours ago

      This makes me so happy about my employer. I’m sysadmin for a newspaper.
      We had an all-company test run 2 weeks ago to answer the question “What if we’re hacked?”

      Turns out we’re able to produce a printed and online newspaper within a work day if NONE of our normal IT systems (hardware, software, e-mail, network) are accessible.
      Everything we need has a redundancy that’s kept completely physically separated from the network until the day it’s needed.

    • Telorand@reddthat.com · 11 hours ago

      My company is pivoting hard to Claude for everything, and besides the fact that it’s irritating as fuck to use, it has me worried about shenanigans like in this article. For almost 50 years, they’ve had a “no reliance upon 3rd-party platforms for core functions” rule, but since they hired an AI apologist to the C-suite, all that has gone out the window in a matter of months.

      Got me thinking I should warm up my resume…

      • criss_cross@lemmy.world · 38 minutes ago

        My company is doing the exact same thing.

        Why everyone is so eager to add an expensive middleman into their workflow is beyond me.

      • BlameTheAntifa@lemmy.world · 10 hours ago

        Got me thinking I should warm up my resume…

        Don’t wait, start now. The job market is a nightmare and finding one that isn’t being consumed by incompetent C-level AI FOMO is getting harder every day. I work on life-saving medical equipment and AI is being pushed on us for things that could literally kill people if not done correctly. Why would anyone spend 30 minutes using AI and risking people’s lives when I can just write it myself in 5 or 10? Madness. Complete, society-scale madness. The people pushing AI have no fucking idea what they are doing or how engineering works. People are going to die.

      • NotMyOldRedditName@lemmy.world · 7 hours ago

        If you’re being forced to use it, just try to convince them to make whatever workflows you use AI-agnostic, so that no single AI is required for them to keep functioning.

        As long as you do that, you won’t run into this.

        • Telorand@reddthat.com · 7 hours ago

          That’s essentially what I’m doing right now, and thus far, they still want workers who understand the code. However, my manager has already said that his boss had it compose a few scripts, and he thought he could therefore replace an entire workflow.

          Thankfully, my manager talked him down and pointed out that it still got several nontrivial things wrong and that taking humans out is dangerous when it comes time to push to production.

          But it’s concerning to see that the higher ups don’t understand what it is and what its limitations are.

          • NotMyOldRedditName@lemmy.world · 6 hours ago

            his boss had it compose a few scripts, and he thought he could therefore replace an entire workflow.

            Yikes! How long until the big boss decides “you’re getting in the way of my plans and you’re wrong,” replaces that manager with a yes-man, and then boom.

    • audaxdreik@pawb.social · 13 hours ago

      Your point is well-taken, but this is also exactly why AI reliance is dangerous. Anyone who sees this should realize the precarity of relying on products that can just be locked away from you.

      • wonderingwanderer@sopuli.xyz · 37 minutes ago

        I don’t know why any company with hundreds of thousands of dollars to spend on commercial LLM APIs wouldn’t just build and self-host their own LLM fine-tuned on data relevant to their work…

      • Jrockwar@feddit.uk · 10 hours ago

        Like Gmail? Google drive? Slack?

        I’m not defending AI, but I can come up with >10 products that would absolutely cripple the company I work at if the provider suddenly says “Soz, terms of service violation”.

        Vendor reliance is dangerous. That doesn’t just apply to AI. If the company in OP’s message had both Claude and Gemini, they’d have been okay, so the problem isn’t with AI explicitly - the problem is with reliance on services that are critical for workflows, and providers being able to change their mind at a moment’s notice.

        In any case, leaving aside where the problem is, the idea that 60 employees can’t use Natural Intelligence to do their jobs means there’s something really wrong with that company…

        • FauxLiving@lemmy.world · 8 hours ago

          Vendor reliance is dangerous.

          1000% this.

          It’s a Faustian bargain: a company gives up all of their internal IT staff and hardware and becomes completely dependent on a vendor for critical business processes. It’s like the opposite of insurance - they’re saving some money by risking a total loss of their ability to do business should the vendor pull support.

      • plyth@feddit.org · 13 hours ago

        Windows 11, Onedrive, Intel Management Engine, Google accounts, …

      • Shizzymcjizzles@lemmy.dbzer0.com · 13 hours ago

        It’s not that they can’t be productive. Right now at least, what AI does is amplify how much work you can do. One of my friends codes for a big company that uses state-of-the-art Claude models, and he says the system does 80-90% of the coding grunt work; his job is more that of an editor, making sure everything is correctly annotated so that humans can understand what’s happening in the code in the future. This means that work that might have taken months he can complete in a week or two.

        • RedstoneValley@sh.itjust.works · 12 hours ago

          This approach to coding is exactly what creates the problem. They will find out the hard way if they can continue to be productive when something breaks and AI is not available for whatever reason. Does anyone know how to fix it? Is the documentation sufficient to understand what the AI did?

          • Shizzymcjizzles@lemmy.dbzer0.com · 11 hours ago

            My friend said early AI iterations were really bad about being opaque, and that even now, if you’re having it design the core architecture, you’re going to have the problems you mentioned. But his job has basically changed to being focused mostly on being that architect. Using the metaphor of constructing a building: he used to have to do a lot of the manual labor too, not just be the architect. Now he just has to tell the AI system what to build AND how, but the majority of the actual “construction” work is done by the AI system.

            • ramble81@lemmy.zip · 11 hours ago

              To continue with the analogy though: how many architects create things that an engineer takes one look at and laughs at because they’re structurally impossible (hint: a lot)? Knowing the deep parts of the code and how it works becomes even more valuable; otherwise you risk Chinese building practices (quick, looks good, falls apart quickly).

              • Shizzymcjizzles@lemmy.dbzer0.com · 9 hours ago

                My friend is a full-stack programmer with over 15 years’ experience at one of the largest financial institutions, so he can handle what you’re talking about no problem. But what IS a huge problem is that the reason he has the requisite knowledge now is that he spent years learning best practices by doing the grunt work that’s going to disappear. So in a few years they might no longer have people with the skills to do things right, and then what you’re describing will absolutely happen and build quality will go to hell. The assumption from big tech is that by then the models will have improved enough that it won’t matter.

                • ramble81@lemmy.zip · 9 hours ago

                  That’s a hell of an assumption. Since we’re whipping out credentials, I’ve been in IT almost 30 years and I can tell you it’s not going to work like that.

              • benjirenji@slrpnk.net · 10 hours ago

                At least in my experience, these models are pretty good now at writing code based on best practices. If you ask for impractical things they will start doing ugly shortcuts or workarounds. A good eye catches these, and you either rerun with a refined prompt, fix your own design, or just keep telling it how you want it fixed.

                You still gotta know what good code looks like to write it, but the models can help a lot.

                • RedstoneValley@sh.itjust.works · 8 hours ago

                  I don’t doubt that it is possible to create good code when focusing on programming best practices etc. and taking the time to check the AI output thoroughly. Time, however, is a luxury most of the devs in those companies don’t have, because they are expected to have 10x code output. And that’s why the shit hits the fan: bad code gets reviewed under pressure, reviewers burn out or bore out, and the codebase deteriorates over time.

                • Shizzymcjizzles@lemmy.dbzer0.com · 9 hours ago

                  This is what I’m hearing too. One thing my friend did mention was that without a nearly unlimited amount of tokens he’d run out really quickly.

    • Jmdatcs@lemmy.world · 10 hours ago

      Fuck AI and all, but to be faaaiiiiir, if you take away most people’s computers, they would be far less efficient than someone who did the same job without one 50 years ago.

      In the profession I recently retired from, if they suddenly went back 50 years in tech the global economy would crash, and even a 20-30 year regression in tech would seriously fuck things up until people adjusted. And even then they wouldn’t be able to reach the same levels of efficiency.

      • partofthevoice@lemmy.zip · 5 hours ago

        Yeah, I think this is normal. You can probably say the exact same sentence for any year to have occurred in the last several hundred years. Probably all the way back to whenever we transitioned to specialization for production scaling. You know, when someone figured out you can make more clocks per day if you have a nut producer, a spring producer, a frame producer, …

    • Carnelian@lemmy.world · 13 hours ago

      Regardless of the fact that work has ground to a halt, the CEO will continue to claim productivity has never been higher since implementing AI.

    • greenbit@lemmy.zip · 11 hours ago

      Eh, consider it like a power outage. These corporations don’t deserve more than automated slop. If that system is down, it’s an earned break.

    • wakko@lemmy.world · 12 hours ago

      Funny how nobody seems to use this argument every time there’s a problem with the NYC subway.

      • Telorand@reddthat.com · 11 hours ago

        Because there are alternatives. You don’t have to use the subway if it breaks down, and people have enough brains to take a taxi or walk instead.

        This is 60 people going, “Fuck, the subway is down. Guess I can’t travel anywhere, now.”

      • Zak@lemmy.world · 12 hours ago

        Based on a quick web search, staff can only remove people temporarily for rule violations; it takes a court order to get a long-term ban from the NYC subway.

        • wakko@lemmy.world · 3 hours ago

          The point is, literally nobody reacts to subway malfunctions with, “and we call this progress???” as if returning to previous modes of transport is somehow the right answer to problems with far less drastic solutions than throwing the baby out with the bathwater.

          LLMs are a new technology that people are still figuring out how to use effectively. Part of that process is becoming reliant upon “the new way of doing things” to prove that one can rely on it. Clearly, there’s more work to be done. (My dayjob includes working on this same reliability problem.)

          One can argue the wisdom of being an early adopter in any new technology. Some thirty years ago, I was told I was insane for going all-in on Linux. The times change. The sanctimoniousness of the peanut gallery hasn’t. The lunatics betting the farm on all that wacky open source stuff three decades ago turned out to have been largely right, despite the numerous failed ventures involved in getting to here.

          This is just how the new technology cycle works. With every new tech, a whole lot of people discover all of the ways it doesn’t work before somebody figures out the way to make it work more reliably than any alternative.

    • ALoafOfBread@lemmy.ml · 11 hours ago

      Disliking AI is fine and good. But that is a really dumb argument.

      “60 employees who can’t be productive without the internet? And this is progress?”

      “60 employees who can’t be productive without computers? And this is progress?”

      “60 scribes who can’t be productive without clay tablets? And this is progress?”

      Etc.

      Edit: LLMs/AI are going to change some things. They are going to make (shitty) coding and various automations much more accessible. They are probably not a revolutionary technology like computers/internet, but that they could be a core part of some people’s workflow is absolutely not unthinkable. It has been shown that there have not, so far, been major boons to productivity on the whole, but that doesn’t mean they don’t have some use cases.

      • mabeledo@lemmy.world · 4 hours ago

        AI is a non-essential tool. Anything that a chatbot produces can and should be achievable by a human with access to the same sources of information. Anyone hired to do a specialist job who cannot perform without access to AI should be summarily fired, because their output would be indistinguishable from that of their LLM of choice.

        In contrast, the Internet (as massive interconnected network), computers, even books, enable humans to deal with information in ways impossible to achieve without them, and help augment us. Reading feeds your brain. Computers are a window to creativity. AI does nothing of the sort, in fact I believe it does the opposite, pushing us to outsource our thinking processes while making us feel smart, undeservedly.

      • XLE@piefed.social · 13 hours ago

        One is a deterministic machine on your desk, that you own, to do stuff at your desk.

        The other is a nondeterministic thing somewhere else, that you don’t own, to do stuff at your desk.

        • ripcord@lemmy.world · 3 hours ago

          That’s really an argument against all cloud services, and not LLMs. Although most people do LLMs in the cloud.

          And I absolutely agree with the argument. It’s insane to me how much companies will put in someone else’s hands.

          • XLE@piefed.social · 2 hours ago

            It is an argument against the false comparison I was responding to, no more. Although the fact that AI companies can’t seem to create a profitable or finished product even with subsidies points to other issues I have not addressed.

              • XLE@piefed.social · 10 hours ago

                I was talking about a false dichotomy (before the person I replied to edited their comment to save face)

                What are you talking about?

                • lIlIlIlIlIlIl@lemmy.world · 10 hours ago

                  You people are like flat earthers with this AI hatred.

                  It’s genuinely fascinating and useful. You’re allowed to hate the companies and evil behind it, but the kid in me is still enthralled by this technology.

                  It’s just getting weird at this point.

      • atrielienz@lemmy.world · 8 hours ago

        In the military we have a maintenance tracking system. It’s electronic. We literally do drills for if it goes down and we have to resort to paper backups. And there are paper backups.

        Without a computer I could still manage an entire flight line worth of planes, and everything they need. Maintenance, fueling, sorties, etc. What you’re telling me is that this company and lots of companies do not have a contingency for if there is a system failure or other outage.

        That seems acceptable? Why? Short of a power outage (and probably not even then, unless we can’t jerry-rig a lighting solution) we can do all the jobs required with hand tools. It’s crazy to think that people don’t think this should be a thing.

        • General_Effort@lemmy.world · 2 hours ago

          Yeah, ok. But the military is explicitly supposed to keep functioning when the backend gets nuked literally. Who wants to pay for that kind of redundancy just so that some people can watch Netflix while they’re dying of radiation poisoning?

          • atrielienz@lemmy.world · 1 hour ago

            Hopefully companies relying on other companies, like CrowdStrike.

            What are we paying for, if not to have things work and to have backups? I have so many questions about the companies you give your money to and what you think you’re getting in return.

            Like, I feel like there are a lot of jobs where email could fail/crash and work could still be done. The whole company shouldn’t just shut down because the AI is down. It shouldn’t shut down because email is down. That’s not just poor planning, it’s really poor business practice.

            What did they do before the AI? Why (when considering how temperamental LLMs can be) would anyone trust it to such an extent that you’re dead in the water if it fails?

      • expr@piefed.social · 13 hours ago

        Except, unlike computers and the internet, AI is not essential, unless your whole business revolves around it (in which case, good riddance).

        • boonhet@sopuli.xyz · 3 hours ago

          Uhhhh computers and Internet aren’t essential either. But they did speed up a lot of things and make new things possible.

          There’s nothing I use AI for that I couldn’t do myself, but AI can do most things faster and a few things better than me (LLMs that is. Image generators do all their things better than me because I can’t art, but I don’t use them at all).

      • CompactFlax@discuss.tchncs.de · 12 hours ago

        If the Internet is down for a period of time at the office, I would expect that my dev team is able to continue working (assuming they’re not exclusively hitting a third party API). At least for a few hours, if not days. It might not be the same cadence, but I’m not about to send them home.

        Computers are a tool; AI is an outsourcing. It’s the difference between a carpentry team not having saws, hammers, etc. and having the carpentry team unable to do work if Jose (the outsourced carpenter) doesn’t come in.

  • Tamps@feddit.uk · 13 hours ago

    Just another form of vendor lock-in. If your business model is mostly/entirely dependent on an external party, that should be a well understood risk.

    • shirasho@feddit.online · 11 hours ago

      I am responsible for gathering information on AI to determine whether we should use it for our next project. The ask was to use it for a critical process task. Immediately in my head I was like “no, we are not using AI at all”, but I obviously need quantifiable data. This is just another thing to add to my list of why using AI for core processes is one of the stupidest things you could ever do.

      • boonhet@sopuli.xyz · 4 hours ago

        Well, you could also spec out a few machines for local LLMs as a sensible alternative to show the higher-ups. “This is what it’ll cost us if we don’t want to be caught with nothing but our dicks in our hands when a vendor decides to shut us down for actually using the shit we’re paying for,” and “it’ll end up saving us money eventually” (if you can make the case for that; you’ll have to do your own calculations).

        Keep in mind the top-of-the-line models will require some 600-700 GB of VRAM IIRC; you may want to check Ollama for examples. And you’d want redundancy of course, not a single machine.

        Capex will usually seem more sensible to businesses than opex since it’s a one-time thing, but this should be big enough to deter them unless you work at a really big company.

        But also, what type of task is it? Perhaps AI is not a bad fit, just LLMs. There are plenty of decent use cases for other types of AI. For example, a classifier could tell whether something is a hot dog or not with pretty good accuracy.
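        The 600-700 GB figure is roughly what a weights-only back-of-envelope estimate gives; a sketch (the 670B parameter count below is an illustrative assumption, and real deployments need extra VRAM for KV cache and activations):

```python
# Floor estimate for VRAM holding model weights:
# GB ~= parameters_in_billions * bits_per_parameter / 8

def weight_vram_gb(params_billions, bits_per_param):
    return params_billions * (bits_per_param / 8)

full_precision_8bit = weight_vram_gb(670, 8)  # ~670 GB for a ~670B-param model
quantized_4bit = weight_vram_gb(670, 4)       # ~335 GB at 4-bit quantization
```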

  • givesomefucks@lemmy.world · 13 hours ago

    Fucking hilarious that the “best” chatbot can’t even manage a decent support chatbot…

      • PlantJam@lemmy.world · 13 hours ago

        Salesforce recently got rid of their “create a case” form and replaced it with a chat bot that does the exact same thing. Of course it tries to talk you out of creating a case first, but will begrudgingly create one eventually. It’s one of the most asinine uses for a chat bot I’ve ever seen.

    • yeehaw@lemmy.ca · 11 hours ago

      Because it doesn’t work 🤣. I can think of maybe once in my life where a chat bot was able to answer my question.