I’m posting this as more of a “fun thought” than anything else.

It’s generally considered a fact that Linux, along with many other open-source software projects, is more efficient than its proprietary closed-source counterparts, specifically in terms of the code it executes.

There are numerous reasons for this, but a large contributing factor is that open-source development generally incentivises developers to write better code.

Currently, it can be argued that Linux is often less power-efficient than its closed-source counterparts, such as Windows and OSX. However, the reason for this lies not in the operating system itself, but rather in the lack of certain built-in hardware support for Linux. Yes, it’s possible to make Linux more power-efficient by configuring things differently or optimizing certain features of your operating system, but it’s not uncommon to see posts from newer Linux laptop users reporting decreased battery life for exactly these reasons.
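
(As a concrete example of the kind of tweak I mean, here is a minimal sketch that switches every core's cpufreq governor to "powersave" via sysfs. It assumes the usual /sys/devices/system/cpu/cpuN/cpufreq/ files exist and needs root; exact paths and available governors vary by driver and distro.)

```python
from pathlib import Path

# Switch every CPU core to the "powersave" cpufreq governor.
# Assumes the usual /sys/devices/system/cpu/cpuN/cpufreq/ layout (most laptops);
# run as root, since these sysfs files are only writable by root.
for gov_file in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq/scaling_governor"):
    available = (gov_file.parent / "scaling_available_governors").read_text().split()
    if "powersave" in available:
        gov_file.write_text("powersave")
        print(f"{gov_file}: powersave")
```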

Taking a step back, though, and looking at a hypothetical world where Linux, or other open-source operating systems and software, hold the majority market share globally, I find it an interesting thought: how much more power-efficient would the world be as a whole?

Of course, computing does not account for the majority of global electricity consumption, and I’m not claiming we’d see radical power-usage changes across the world; I’m talking specifically about computing. If hardware were built for Linux, and computers came pre-installed with optimizations and fixes targeted at their specific hardware, how much energy would we be saving each year?
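
(To make the question concrete, the arithmetic is simply installed base times average watts saved times hours per year. A tiny sketch with completely made-up placeholder numbers, purely to show the shape of the estimate rather than real data:)

```python
# Shape of the estimate, with placeholder numbers (NOT real data):
#   yearly_savings = machines * average_watts_saved * hours_per_year
machines = 1_000_000_000       # hypothetical installed base of desktops/laptops
watts_saved = 2                # hypothetical average saving per machine, in watts
hours_per_year = 24 * 365

kwh_saved = machines * watts_saved * hours_per_year / 1000   # Wh -> kWh
print(f"~{kwh_saved / 1e9:.1f} TWh per year with these guesses")
```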

Nanny Cath watching her YouTube videos, or Jonny scrolling through his Instagram feed, would be doing so in a much more energy-efficient manner.

I suppose I’m not really arguing much, just posting as an interesting thought.

  • IsoKiero@sopuli.xyz

    Interesting thought indeed, but I highly doubt the difference is anything you could measure, and there are a ton of contributing factors, like what kind of services are running on a given host. So, to get any kind of comparable results, you’d have to run multiple different pieces of software with pretty much identical usage patterns on both operating systems.
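
    (On the Linux side of such a comparison, one rough approach is to sample the battery’s reported draw while a fixed workload runs. A minimal sketch, assuming a laptop that is running on battery and exposes /sys/class/power_supply/BAT0/power_now in microwatts; the battery name and available files vary by machine, and Windows would need its own tooling:)

    ```python
    import time
    from pathlib import Path

    # Sample the battery's reported power draw once a second for a minute
    # while some fixed workload runs. Assumes /sys/class/power_supply/BAT0/power_now
    # exists and reports microwatts; some machines expose current_now/voltage_now instead.
    POWER = Path("/sys/class/power_supply/BAT0/power_now")

    samples = []
    for _ in range(60):
        samples.append(int(POWER.read_text()) / 1_000_000)   # µW -> W
        time.sleep(1)

    print(f"average draw: {sum(samples) / len(samples):.1f} W")
    ```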

    Also, hardware support plays a big part. A laptop with dual GPUs and “perfect” driver support on Windows would absolutely wipe the floor with a Linux install that couldn’t switch GPUs on the fly (I don’t know how well that scenario is supported on Linux today). Same with multi-core CPUs and their efficient usage, though I think the operating system plays a much smaller role there.

    However, changes in hardware, like ARM CPUs, would make a huge difference globally; at least traditionally that’s an area where Linux shines on compatibility, and it’s why Macs run on battery for longer. But in reality, if we could squeeze more out of our CPU cycles globally and do things more efficiently, we’d just throw more work at them and still consume more power.

    Back when cellphones (and other rechargeable things) became mainstream, their chargers were so inefficient that unplugging them actually made sense, but today our USB bricks consume next to nothing when they’re idle, so it doesn’t really matter.

    • DNAmaster10@lemmy.sdf.orgOP

      Yes, massively. At least with current data, I don’t imagine it would even be possible to measure this on a large scale, especially given the variation in what a computer is actually trying to do. I think it’s made even harder by the fact that software is often targeted at Windows or OSX rather than Linux, so even benchmarking is near impossible unless you’re writing software that leverages the specific features of Linux which make it more optimized.

      • IsoKiero@sopuli.xyz

        Linux, so even benchmarking is near impossible unless you’re writing software that leverages the specific features of Linux which make it more optimized.

        True. I have no doubt that you could set up a Linux system to calculate pi to 10 million digits (or something similar) more power-efficiently than a Windows-based system, but that would mean compiling your own kernel with everything unnecessary for that particular system left out, shutting down a ton of daemons that commonly run on a typical desktop, and so on, and you’d waste far more power on testing than you could ever save. And it might not even be faster, just less power-hungry; but no matter what, it would be far, far away from any real-world scenario, and would instead be a competition to build hardware and software that do that very specific thing with as little power as possible.
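
        (For the fixed workload itself, something like this toy pi calculation using Machin’s formula would at least be identical on any OS; it is nowhere near fast enough for 10 million digits, but the point is only that both systems do exactly the same work:)

        ```python
        from decimal import Decimal, getcontext

        def arctan_inv(x: int, digits: int) -> Decimal:
            """arctan(1/x) by its Taylor series, accurate to roughly `digits` digits."""
            eps = Decimal(10) ** -(digits + 5)
            power = Decimal(1) / x           # 1 / x^(2k+1), starting at k = 0
            total = power
            k = 0
            while power > eps:
                k += 1
                power /= x * x
                term = power / (2 * k + 1)
                total += -term if k % 2 else term
            return total

        def pi_digits(digits: int) -> Decimal:
            """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
            getcontext().prec = digits + 10  # working precision plus guard digits
            return 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)

        print(str(pi_digits(1_000))[:50])    # same work no matter which OS runs it
        ```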

        • DNAmaster10@lemmy.sdf.orgOP

          I think the other difficulty would be the need to know both Linux and Windows through and through, to ensure the code you’re writing is leveraging all the OS-specific advantages. But yes, it’s definitely an interesting hypothetical.

    • ReveredOxygen@sh.itjust.works

      At least with AMD on Wayland, GPU offloading works seamlessly. But I’m not sure whether the GPU is actually powered off when I’m not using it; my use case is an eGPU rather than a dual-GPU laptop, so I don’t notice the battery impact from it. I don’t know what the situation is with Nvidia or Xorg.
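
      (If you ever want to check, the kernel exposes the runtime power-management state of each card. A minimal sketch, assuming the usual /sys/class/drm/cardN layout; “suspended” generally means the device has been powered down:)

      ```python
      from pathlib import Path

      # Print the runtime power-management state of each DRM GPU.
      # "suspended" means the kernel has runtime-suspended (powered down) the device,
      # "active" means it is currently powered on.
      for card in sorted(Path("/sys/class/drm").glob("card[0-9]")):
          status_file = card / "device/power/runtime_status"
          if status_file.exists():
              print(f"{card.name}: {status_file.read_text().strip()}")
      ```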