• pycorax@lemmy.world · 8 upvotes · 10 hours ago

    It’s using graphene, so we’ll see this as soon as the hundreds of other promised graphene innovations arrive too, which is to say: who knows when?

  • aMockTie@lemmy.world · 42 upvotes · 16 hours ago

    For those, like me, who wondered how much data was written in 400 picoseconds, the answer is a single bit.

    If I’m doing the math correctly, that’s about 0.3 GBps per cell, so write speeds in the 10s-100s GBps range once you write a few hundred cells in parallel.
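    A quick back-of-the-envelope check (one bit per 400 ps is from the article; the parallelism figure is an illustrative assumption):

```python
# Back-of-the-envelope throughput for a 400 ps per-bit write.
write_time_s = 400e-12                      # 400 picoseconds per bit
bits_per_s = 1 / write_time_s               # 2.5e9 bits/s for one cell
gbytes_per_s = bits_per_s / 8 / 1e9         # ~0.31 GBps per cell
# Hypothetical device writing 256 cells in parallel:
parallel_gbytes_per_s = gbytes_per_s * 256  # ~80 GBps
```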

  • Chozo@fedia.io · 17 upvotes · 17 hours ago

    Other than just making everything generally faster, what would be a use-case that really benefits the most from something like this? My first thought is something like high-speed cameras; some Phantom cameras can capture hundreds, even thousands of gigabytes of data per second, so I think this tech could probably find some great applications there.

    • BetaDoggo_@lemmy.world · 6 upvotes · 9 hours ago

      The speed of many machine learning models is bound by the bandwidth of the memory they’re loaded into, so that’s probably the biggest one.
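      A rough sketch of why (all numbers here are illustrative assumptions): a memory-bound model has to read every weight once per generated token, so generation speed is approximately bandwidth divided by model size.

```python
# Rule-of-thumb for memory-bandwidth-bound inference (illustrative numbers).
bandwidth_gbps = 100.0   # assumed memory bandwidth, GB/s
model_size_gb = 14.0     # e.g. ~7B parameters at 16-bit precision
# Each generated token streams all weights through memory once:
tokens_per_s = bandwidth_gbps / model_size_gb
```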

    • frezik@midwest.social · 16 upvotes · 15 hours ago

      There are some servers using SSDs as a direct extension of RAM. Current flash doesn’t have the write endurance or the latency to fully replace RAM; this solves the latency half.

      Imagine, though, if we could unify RAM and mass storage. That’s a major assumption in the memory hierarchy that goes away.
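      Today’s nearest software-level approximation is memory-mapping a file, so storage is addressed like RAM. A minimal sketch (the file is a throwaway temp file):

```python
import mmap
import os
import tempfile

# Create a small file to act as the "storage" side.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    path = f.name

# Map it into the address space and write through it like ordinary memory.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"  # store via the mapping, no write() call
    mem.flush()          # push the dirty page to the backing file
    mem.close()

# The bytes survive as ordinary file contents.
with open(path, "rb") as f:
    data = f.read(5)
os.unlink(path)
```

With memory this fast, the page-cache shuffling behind that mapping could in principle disappear entirely.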

    • cmnybo@discuss.tchncs.de · 7 upvotes · 16 hours ago

      I doubt it would work for the buffer memory in a high speed camera. That needs to be overwritten very frequently until the camera is triggered. They didn’t say what the erase time or write endurance is. It could work for quickly dumping the RAM after triggering, but you don’t need low latency for that. A large number of normal flash chips written in parallel will work just fine.

    • aMockTie@lemmy.world · 8 upvotes · 16 hours ago

      The article highlights on-device AI processing. Could be game-changing in a lot of ways.