• adam@kbin.pieho.me
    11 months ago

ITT: people who don’t understand that generative ML models for imagery need gigabytes of active memory and teraflops of compute to run.

      • ᗪᗩᗰᑎ@lemmy.ml
        11 months ago

        And a lot of those require models that are multiple gigabytes in size, which then need to be loaded into memory and processed on a high-end video card, one that would generate enough heat to ruin your phone’s battery if it could somehow be shrunk to fit inside a phone. This just isn’t feasible on phones yet. Is it technically possible today? Yes, absolutely. Are the tradeoffs worth it? Not for the average person.
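
        The memory side of that argument can be sketched with back-of-the-envelope arithmetic. The parameter count below is an illustrative assumption (roughly the scale of a Stable-Diffusion-class image model), not a measurement of any particular model:

        ```python
        # Estimate the memory needed just to hold a model's weights at
        # different numeric precisions, ignoring activations and overhead.
        # PARAMS is an assumption for illustration, not an exact figure.

        def weight_memory_gb(num_params: float, bytes_per_param: int) -> float:
            """Gigabytes required to hold the weights alone."""
            return num_params * bytes_per_param / 1e9

        PARAMS = 1.0e9  # ~1 billion parameters (illustrative assumption)

        for name, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
            print(f"{name}: {weight_memory_gb(PARAMS, nbytes):.1f} GB")
        # fp32: 4.0 GB, fp16: 2.0 GB, int8: 1.0 GB
        ```

        Even aggressively quantized, a billion-parameter model occupies a gigabyte before any working memory is counted, which is a large slice of a typical phone’s RAM.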