

NPUs are meant to offload specific tasks, the kind CPUs handle less efficiently than GPUs, while drawing less power than either. At least, that's what the manufacturers claim.
Thanks for the tips!
That’s fair. I suppose I took the manufacturers’ marketing statements about NPUs closer to face value than what’s actually feasible.
I was hoping for something entry-level to work with, recreate, or train smaller models on, so I could avoid paying for extra services. But outside of simply running the apps the companies are pushing onto their devices, plus a few community ones, local NPU use maybe doesn’t have much value without massive hardware.
Wdyt?
From what I’ve seen, AMD only offers compiling models for the NPU on Linux, while Intel has OpenVINO, which lets you actually run models on theirs. But I might be wrong.
Otherwise, NPU use seems limited mostly to Windows at the moment.
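For anyone curious, here’s a rough sketch of what running a model on an Intel NPU through OpenVINO’s Python API might look like. It assumes you’ve installed the openvino package, have the NPU driver/plugin set up, and have already converted a model to OpenVINO IR format ("model.xml" is just a placeholder name); I haven’t tested this on NPU hardware myself.

```python
import numpy as np
import openvino as ov  # pip install openvino

core = ov.Core()
# If the NPU driver/plugin is installed, "NPU" should show up in this list
print(core.available_devices)

# "model.xml" is a placeholder for a model you've already converted to OpenVINO IR
model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="NPU")

# Dummy input matching the model's first input (assumes a static input shape)
input_shape = compiled.input(0).shape
dummy = np.random.rand(*input_shape).astype(np.float32)

# Run inference and grab the first output
result = compiled(dummy)[compiled.output(0)]
print(result.shape)
```

If the NPU doesn’t get picked up, swapping device_name to "CPU" or "GPU" should at least confirm the rest of the pipeline works.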
Edit: I just found this: https://github.com/amd/gaia
It’s still limited, but I wanted to put it out there for people.
Thank you.
Those are very expensive machines, but certainly would be capable of the work. I’m not sure I can afford it.