• 0 Posts
  • 41 Comments
Joined 1 year ago
Cake day: June 14th, 2023



  • Ideally you want something that gracefully degrades.

    So, my media library is hosted by Plex/Jellyfin and a bunch of complex firewall and reverse proxy stuff… And it’s replicated using Syncthing. But at the end of the day it’s on an external HDD that they can plug into a regular old laptop and browse on pretty much any OS.

    Same story for old family photos (Photoprism, indexing a directory tree on a Synology NAS) and regular files (mostly just direct SMB mounts on the same NAS).

    Backups are a bit more complex, but I also have fairly detailed disaster recovery plans that explain how to decrypt/restore backups and access admin functions, if I’m not available (in the grim scenario, dead - but also maybe just overseas or otherwise indisposed) when something bad happens.

    Aside from that, I always make sure that all of the self-hosting stuff in my family home is entirely separate from the network infra. No DNS, DHCP or anything else ever runs on my hosting infra.


  • rho50@lemmy.nz to Selfhosted@lemmy.world · Llama-FS Self-Organizing File Manager · 5 months ago

    It would be better to have this as a FUSE filesystem though - you mount it on an empty directory, point the tool at your unorganised data and let it run its indexing and LLM categorisation/labelling, and your files are resurfaced under the mountpoint without any potentially damaging changes to the original data.

    The other option would be just generating a bunch of symlinks, but I personally feel a FUSE implementation would be cleaner.

    It’s pretty clear that actually renaming the original files based on the output of an LLM is a bad idea though.
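    The symlink option is easy to sketch. Here is a minimal, hypothetical version: `categorise` is a placeholder for whatever the LLM would return, and the originals are never touched — only a tree of symlinks is created under the view directory.

    ```python
    # Sketch of the symlink approach: surface files under LLM-assigned
    # category directories without modifying the original data.
    # `categorise` is a hypothetical stand-in for the LLM labelling call.
    import os

    def categorise(path: str) -> str:
        # Placeholder: a real implementation would ask the LLM for a label.
        ext = os.path.splitext(path)[1].lstrip(".").lower()
        return ext or "misc"

    def build_view(source_dir: str, view_dir: str) -> None:
        """Walk source_dir and mirror each file as a symlink in view_dir,
        bucketed by category. Originals are left untouched."""
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                original = os.path.join(root, name)
                bucket = os.path.join(view_dir, categorise(original))
                os.makedirs(bucket, exist_ok=True)
                link = os.path.join(bucket, name)
                if not os.path.lexists(link):
                    os.symlink(os.path.abspath(original), link)
    ```

    A FUSE implementation would follow the same shape, except the categorised tree would be synthesised at read time under the mountpoint instead of materialised as symlinks.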


  • (6.9-4.2)/(2024-2018) = 0.45 “version increments” per year.

    4.2/(2018-1991) = 0.15 “version increments” per year.

    So, the pace of version increases in the past 6 years has been around triple the average from the previous 27 years, since Linux’s first release.

    I guess I can see why 6.9 would seem pretty dramatic for long-time Linux users.

    I wonder whether development has actually accelerated, or if this is just a change in the approach to the release/versioning process.
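    The arithmetic above, spelled out (the kernel jumped to 4.x in 2015, so treating 4.2 as the 2018 level is an approximation from the original comment):

    ```python
    # Version number gained per year in each era of Linux kernel releases.
    recent = (6.9 - 4.2) / (2024 - 2018)   # ~0.45 increments/year
    earlier = 4.2 / (2018 - 1991)          # ~0.16 increments/year
    ratio = recent / earlier               # ~2.9, i.e. roughly triple
    ```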


  • The DJI Fly app is probably considerably worse for security/privacy than most Google apps. DJI has a storied history of sketchy practices in their apps: see here.

    Google also won’t allow DJI to distribute their apps through the Play Store, because of DJI’s weird insistence on being able to push arbitrary binaries to customers’ phones entirely free of any third party vetting.

    GrapheneOS’ sandbox hardening might help somewhat, but I’d recommend avoiding DJI products if you can. If you must use DJI Fly, prefer to use it in a different profile where it can’t touch any of your personal apps. Tough when they are singularly the best drone manufacturer for videography though.




  • You can restrict what gets installed by running your own repos and locking the machines to only use those (either give employees accounts with no sudo access, or have monitoring that alerts when repo configs are changed).

    Once you’re in that zone, you do need some fast-acting reactive tools that keep watch for malware.

    For anti-malware, I don’t think there are very many agents available to the public that work well on Linux, but they do exist inside big companies that use Linux for their employee environments. For forensics and incident response there is GRR, which has Linux support.

    Canonical may have some offering in this space, but I’m not familiar with their products.
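    On a Debian-style system, the repo lockdown might look something like this — a sources file pointing only at an internal mirror (hostname and suite are illustrative, not from any real deployment):

    ```
    # /etc/apt/sources.list — only the internal, vetted mirror is allowed.
    # Pair this with no-sudo user accounts, or monitoring that alerts
    # whenever this file changes.
    deb https://mirror.internal.example/debian bookworm main
    deb https://mirror.internal.example/debian-security bookworm-security main
    ```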




  • rho50@lemmy.nz to Technology@beehaw.org · But Claude said tumor! · edited · 7 months ago

    I don’t think it’s necessarily a bad thing that an AI got it wrong.

    I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model and is fundamentally not fit for use as a diagnostic tool. Not even a screening/aid tool for physicians.

    There are AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.



  • Exactly. So the organisations creating and serving these models need to be clearer about the fact that they’re not general purpose intelligence, and are in fact contextual language generators.

    I’ve seen demos of the models used as actual diagnostic aids, and they’re not LLMs (plus require a doctor to verify the result).


  • There are some very impressive AI/ML technologies that are already in use as part of existing medical software systems (think: a model that highlights suspicious areas on an MRI, or even suggests differential diagnoses). Further, other models have been built and demonstrated to perform extremely well on sample datasets.

    Funnily enough, those systems aren’t using language models 🙄

    (There is Google’s Med-PaLM, but I suspect it wasn’t very useful in practice, which is why we haven’t heard anything since the original announcement.)



  • I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone’s large bowel as “likely to be an aggressive malignancy,” leading said person to fully expect they’d be dead by July, when in fact they were perfectly healthy.

    These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.

    The misinformation is causing real harm.