• merthyr1831@lemmy.ml · 9 days ago

    Yet another reason to back Flatpaks and distro-agnostic software packaging. We can't afford to run dozens of build systems to maintain dozens of functionally identical application repositories.

    • ubergeek@lemmy.today · 9 days ago

      Pretty sure Flatpak uses Alpine as a bootstrap… Flatpak, after all, brings along an entire distro to run an app.

    • harsh3466@lemmy.ml · 9 days ago

      I’m a fan of flatpaks, so this isn’t to negate your argument. Just pointing out that Flathub is also using Equinix.

      Source

      Interlude: Equinix Metal née Packet has been sponsoring our heavy-lifting servers doing actual building for the past 5 years. Unfortunately, they are shutting down, meaning we need to move out by the end of April 2025.

    • balsoft@lemmy.ml · 9 days ago

      I don’t think it’s a solution to this; it would just mean maintaining many distro-agnostic repos. Forks and alternatives always thrive in the FOSS world.

      • merthyr1831@lemmy.ml · 8 days ago

        The sandbox is the point! Yes, there are still shortcomings in the sandbox/portal implementation, but if Snaps can find a way to improve the end-user experience despite containerising (most) apps, then so can Flatpak.

        It’s similar to how we’re at that awkward cusp of Wayland becoming the one and only display protocol for Linux, while still living with the pitfalls and caveats that come with managing such a wide-ranging FOSS project.

    • Mwa@lemm.ee · 9 days ago (edited)

      Let the community package it as deb, rpm, etc., while the devs focus on Flatpak/AppImage.

    • LeFantome@programming.dev · 8 days ago (edited)

      We have this guy saying we cannot build all the Alpine packages once to share with all Alpine users. Unsustainable!

      On the other hand, we have the Gentoo crowd advocating for rebuilding everything from source for every single machine.

      In the middle, we have CachyOS building the same x86-64 packages multiple times for machines with tiny differences in the CPU flags they support.

      The problem is distribution more than building anyway, I would think. You could probably create enough infrastructure to build Alpine for everybody on the free tier of Oracle Cloud. But you are not going to have enough bandwidth for everybody to download it from there.

      But Flatpak does not solve the bandwidth problem any better (it just moves the problem to somebody else).

      Then again, there are probably more Alpine bits being downloaded from Docker Hub than anywhere else.

      Even though I was joking above, I kind of mean it. The article says they have two CI/CD “servers” and one dev box. This is 2025. Those can all be containers or virtual machines. I am not even joking that the free tier of Oracle Cloud (or wherever) would do it. To quote the web, “you can run a 4-core, 24GB machine with a 200GB disk 24/7 and it should not cost you anything. Or you can split those limits into 2 or 4 machines if you want.”

      For distribution, why not torrents? Look for somebody to provide “high-performance” servers for downloads, I guess, but in the meantime you really do not need any infrastructure these days just to distribute things like ISO images to people.
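
      (A rough back-of-the-envelope sketch, in Python, of why building and distributing are such different problems. The ~800 TB/month traffic figure is the one discussed further down this thread; the ~10 TB/month free-egress allowance is an assumption about typical cloud free tiers, not a quoted Oracle number.)

      ```python
      # Back-of-the-envelope: building Alpine vs. distributing it.
      # Assumed numbers: ~800 TB/month of download traffic (figure discussed
      # in this thread) and a ~10 TB/month free-tier egress allowance
      # (assumption, not a quoted cloud-provider limit).

      monthly_traffic_tb = 800        # distribution demand, TB per month
      free_tier_egress_tb = 10        # assumed free-tier egress cap, TB per month

      # How many free-tier allowances the download traffic alone would burn.
      allowances_needed = monthly_traffic_tb / free_tier_egress_tb
      print(f"distribution needs ~{allowances_needed:.0f}x a free-tier egress allowance")
      ```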

      • merthyr1831@lemmy.ml · 8 days ago

        There are other costs, too. Someone has to spend a LOT of time maintaining these repos: testing and reviewing each package, responding to bugs caused by each packaging format’s choice of dependencies, and doing all of that for multiple branches of supported distro versions! That’s a lot of man-hours that could still go towards app distribution, but pooled together they could help make applications even more robust and secure than before.

        And, if we’re honest, apart from a few outliers like Nix and Gentoo, there’s little functional difference between the package formats; they simply came to exist to fill the same need before Linux was big enough to establish a “standard”.


        Aaaanyway

        I do think package formats could leverage torrenting more, though. It could make updates a bit harder to distribute quickly, in theory, but nothing fundamentally out of the realm of possibility. Many distros already use torrents as their primary form of ISO distribution.

        • LeFantome@programming.dev · 6 days ago

          I was thinking mostly of ISO images, I guess. You are talking about package updates.

          First, fair point.

          That said, for package updates, are there not Alpine mirrors? You do not need much bandwidth to feed out to the mirrors.

          But I agree that, ultimately, they are going to have to find a home for the package repos if they want to directly feed their install base.

          As for “the other costs”, those do not seem to have anything to do with their hosting going away.

  • ryannathans@aussie.zone · 10 days ago

    How are they so small and underfunded? My hobby home servers and internet connection satisfy their simple requirements.

      • DaPorkchop_@lemmy.ml · 9 days ago

        That’s ~2.4 Gbit/s. There are multiple residential ISPs in my area offering 10 Gbit/s up for around $40/month, so even if we assume the bandwidth is significantly oversubscribed, a single cheap residential internet plan should be able to handle that no problem (let alone a datacenter setup, which probably has 100 Gbit/s links or faster).
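
        (For anyone checking the arithmetic, a quick Python sketch: ~800 TB/month, the figure discussed in this subthread, averaged over a 30-day month and compared against the 10 Gbit/s plan mentioned above.)

        ```python
        # Where "~2.4 Gbit/s" comes from: ~800 TB of traffic spread evenly
        # over a 30-day month, then compared to a 10 Gbit/s uplink.

        TB = 10**12                             # decimal terabyte, in bytes
        monthly_traffic_bytes = 800 * TB
        seconds_per_month = 30 * 24 * 3600

        avg_bits_per_second = monthly_traffic_bytes * 8 / seconds_per_month
        avg_gbit_s = avg_bits_per_second / 1e9
        print(f"average rate: {avg_gbit_s:.2f} Gbit/s")                  # ~2.47 Gbit/s

        uplink_gbit_s = 10                      # the residential plan mentioned above
        print(f"share of a 10 Gbit/s uplink: {avg_gbit_s / uplink_gbit_s:.0%}")  # ~25%
        ```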

        • synicalx@lemm.ee · 8 days ago

          If you do 800 TB in a month on any residential service, you’re getting fair-use-policy’d before the first day is over, sadly.

          • LeFantome@programming.dev · 8 days ago

            With my ISP, the base package comes with 4 TB of bandwidth, and I pay an extra $20 a month for “unlimited”.

            I am not sure if “unlimited” has a limit. It may. It is not in the small print, though. I may just be rate-limited (3 Gbps).

            • synicalx@lemm.ee · 8 days ago

              It will definitely depend on the ISP, but generally for repeated “AUP” violations they will suspend your service entirely.

              Interestingly, it’s often not technically the data usage that triggers this; it’s how much utilisation (generally peak utilisation) you cause, and high data usage is a by-product of that. Bandwidth from an ISP’s core network to the various POIs that customer connections hang off is generally quite expensive, and residential broadband connections are fairly low margin. So let’s say they’ve got 100 Gbps to your POI that could realistically service many thousands of people; a single connection worth €/$10-15 a month occupying 10% of that is cause for concern.
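
              (A tiny Python sketch of that utilisation argument. The 100 Gbps POI link and the “one user occupying ~10%” scenario are from the comment above; the 5 Mbit/s busy-hour average per customer is purely an assumed illustrative figure.)

              ```python
              # ISP's-eye view: one customer seeding flat out vs. the POI backhaul.
              # 100 Gbit/s POI link and the ~10% share come from the comment above;
              # the 5 Mbit/s busy-hour average per customer is an assumption.

              poi_capacity_gbit = 100             # backhaul capacity to the POI
              heavy_user_gbit = 10                # one customer uploading at full line rate
              typical_user_busy_hour_mbit = 5     # assumed per-customer busy-hour average

              share = heavy_user_gbit / poi_capacity_gbit
              displaced = heavy_user_gbit * 1000 / typical_user_busy_hour_mbit

              print(f"one heavy user: {share:.0%} of the POI link")
              print(f"equivalent busy-hour capacity: ~{displaced:.0f} typical customers")
              ```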

      • ryannathans@aussie.zone · 10 days ago

        On my current internet plan I can move about 130 TB/month, and that’s sufficient for me, but I could upgrade the plan to satisfy the requirement.

        • Karna@lemmy.ml (OP) · 9 days ago

          Your home server might have the required bandwidth, but not the requisite infra to support the server load (hundreds of parallel connections/downloads).

          Bandwidth is only one aspect of the problem.

          • ryannathans@aussie.zone · 9 days ago

            Ten-gig fibre for internal networking, enterprise SFP+ network hardware, a big meaty 72 TB FreeBSD ZFS file server with plenty of cache, backup power supply, and a UPS.

            The tech they require really isn’t expensive anymore.

  • Evil_Shrubbery@lemm.ee · 9 days ago (edited)

    We underfund our heroes, don’t we?

    (Also, that monitor’s model name in the thumbnail: “UHD 4K 2K” :D)

    • Karna@lemmy.ml (OP) · 9 days ago

      That solves the media-distribution storage issue, but not the CI/CD pipeline infra issue.

  • msage@programming.dev · 8 days ago

    I will just say:

    WHAT THE FUCK are we doing? Alpine has been used in Docker, and Docker is now run everywhere.

    WHY are these necessary tools underfunded? They barely even need anything. Why do companies not support them?

    Can we start giving at least 0.000001% of net profit to the basic tools used? Can we globally force companies to give the smallest pittance to the open-source projects they make trillions off of?

    • Karna@lemmy.ml (OP) · 7 days ago (edited)

      Alpine has been used in Docker, and Docker is now run everywhere

      This is exactly what came to my mind while reading through the article.