• PlantObserver@lemmy.world · 6 months ago

    Ultra 7 155H with six P-cores, eight E-cores, and eight graphics cores; or an Ultra 7 165H with the same number of cores but marginally higher clock speeds.

    WTF is Intel smoking with these naming schemes? I can’t even understand what this means. Thank fuck AMD is an option.

    • AggressivelyPassive@feddit.de · 6 months ago

      The number after “Ultra” works pretty much the same as in the old i3/i5/i7 scheme: 3 is entry level, 5 is mid-range, 7 is high end, 9 is bad decision making.

      The number after that works roughly like before: higher means better, probably with a digit added for each new generation. Remember, the first i5s had three-digit names; a fourth digit was later prepended to indicate the generation.

      Thing is, there’s no really good naming scheme, because there are so many possible variants/dimensions: base clock, turbo clock, TDP, P-core count, E-core count, PCIe lanes, socket, generation, … How would you encode all that in a readable name?

      • far_university1990@feddit.de · 6 months ago

        Just concat: intel i7 11g4p8e128l420c520b

        11th gen, 4 P-cores, 8 E-cores, 128 PCIe lanes, 4.20 GHz base clock, 5.20 GHz boost clock.

        Letters in between keep it readable. Maybe drop the lane count if it never changes for a given P-core/E-core combination.

        G.Skill does a similar thing: F5-5200J3636C16GX2-FX5

        5200 MHz unbuffered DIMM, 36-36-36 timings, 1.20 V, 16 GB per module, dual channel, 2 modules per kit.

        See here: https://www.gskill.com/faq/1502180912/DRAM-Memory

        edit: could also encode the architecture with a letter to indicate refreshes, and add suffixes for APUs and maybe TDP.

        Could even use letters for some of the numbers: there aren’t that many distinct core counts, so a = 1 P-core, b = 2 P-cores, c = 3 P-cores, … more than 26 P-cores is unlikely to ever happen in a consumer CPU. Same for E-cores, maybe. A rough sketch of the encoding is below.
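
        A minimal sketch of that concatenation idea in Python (the field set and exact format are this comment’s proposal, not any real Intel scheme):

            # Build a spec-sheet name like "11g4p8e128l420c520b":
            # generation, P-cores, E-cores, PCIe lanes, base clock, boost clock,
            # each value followed by a letter that keeps the string readable.
            def cpu_name(gen: int, pcores: int, ecores: int, lanes: int,
                         base_ghz: float, boost_ghz: float) -> str:
                return (f"{gen}g{pcores}p{ecores}e{lanes}l"
                        f"{round(base_ghz * 100)}c{round(boost_ghz * 100)}b")

            print(cpu_name(11, 4, 8, 128, 4.20, 5.20))  # -> 11g4p8e128l420c520b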

          • far_university1990@feddit.de · 6 months ago

            Yes, so you can see what’s different between CPUs without going to Intel’s page and reading the spec sheet. Not just that the CPUs are different.

            What does “readable” mean to you?

            • AggressivelyPassive@feddit.de · 6 months ago

              For example, being able to get a rough sense of the performance from the name alone.

              An i5 10500 is faster than an i5 10400. But is 6p4e better than 4p8e?

              It’s an illusion that you can fit everything about a CPU into its name. What you’re proposing is essentially the entire value column of the spec sheet concatenated.

              • far_university1990@feddit.de · 6 months ago

                If 10500 means 6p4e and 10400 means 4p8e, which is faster depends on the workload. So comparing by that number isn’t good, and that’s how it currently is.

                Also, if 10900 is then 12p0e, it’s maybe not faster for gaming if the game is single-threaded, so the comparison breaks again. And it’s not good for mobile devices that care about battery life. Who tells you that?

                And yes, it’s basically just the most important or most-compared specs concatenated. That describes the CPU, and I think that’s what a name is supposed to do.

                • AggressivelyPassive@feddit.de · 6 months ago

                  And how many people do you think could accurately, or even ballpark, estimate their workload? I couldn’t tell you whether my workload would benefit from more E- or P-cores, or by how much.

                  What you’re implying here is an illusion of accuracy. You want accurate numbers for something you can’t really judge anyway. These numbers don’t mean anything to you; they just give you the illusion of knowing what’s going on. It’s the “close door” button in an elevator.

              • far_university1990@feddit.de · 6 months ago

                A name is supposed to describe the thing. Too much information isn’t the problem. If you think it’s too long, you can shorten it to just enough information that different CPUs get different names. Which is what I did.

                edit: also, the question was how to encode the different CPU variants into a name, so the result has to include that information.

      • frezik@midwest.social · 6 months ago

        They have high-power and low-power cores. The idea is borrowed from ARM’s big.LITTLE design.

    • tabular@lemmy.world · 6 months ago

      I can’t even understand what this means

      I think that’s the intent, and they fucking nailed it.

    • barryamelton@lemmy.ml · 6 months ago

      It’s intentional, like with “high-end” car models, so you can’t distinguish them by features or age.

  • Technus@lemmy.zip · 6 months ago

    The Core Ultra chips, like the Ryzen 7040-series chips, also include a neural processing unit (NPU) that can be used to accelerate some AI workloads. But both NPUs fall far short of the performance required for Recall and other locally accelerated AI features coming to Windows 11 24H2 later this year;

    Why even waste the fucking space on the die then?

      • Technus@lemmy.zip · 6 months ago

        I sure as hell don’t, but it seems extra pointless when it can’t even run the workloads it was designed for.

        • tedu@azorius.net · 6 months ago

          I’m sure it still works in Photoshop or whatever, just not the Windows stuff.

    • fif-t@kbin.social · 6 months ago

      Because the NPUs were designed and built and included long before Windows 11’s AI features were announced?

      If I recall correctly, it typically takes about 4 years for a CPU to go from design to distribution.

      • Technus@lemmy.zip · 6 months ago

        Meteor Lake was taped out in May 2021 and launched in December 2023. Still much slower than the pace of LLM development, to be fair. It seems more like an “if you build it, they will come” approach. But that’s also how we got stuck with (for most consumer purposes) useless tensor cores on our GPUs. Does anyone even give a shit about raytracing/DLSS anymore?

        It actually sounds like Microsoft is betraying Intel for Qualcomm, since their upcoming processor in the new Surface tablet is the only one that actually meets the requirements. So it looks like Microsoft doesn’t give two shits about supporting existing hardware either way.

        • iopq@lemmy.world · 6 months ago

          Tensor cores can be used to play chess, generate images, do realistic text-to-speech, noise cancellation, content-aware fill, etc.

          They are only useless to you and other people with no imagination.

          • Technus@lemmy.zip · 6 months ago

            Chess engines have outplayed humans for thirty years, and they didn’t need teraflops of computing power to do it.

            Generative AI is actively harmful to the environment, slowing the phase-out of coal in the US and guzzling billions of gallons of water. It’s likely going to kill jobs and it’s already filling the internet and the academic world with garbage. It’s also likely a bubble that will burst before long, potentially bringing the economy down with it.

            I’ll give you noise cancellation and text-to-speech, that’s pretty cool.

            But personally, I’d rather have more CUDA cores.

            • Jrockwar@feddit.uk · 6 months ago

              That middle paragraph is very misleading. It’s generative AI as a service that is actively harmful to the environment. Having a 15 W chip do tasks like erasing objects from a photo is no more harmful to the environment than a GPU that uses 15 W. In fact, NPUs can be more efficient at some tasks than GPUs.

              The problem is opening your phone/browser and being able to call GPT-4 on demand, waking up a cluster of 128 Nvidia A100s operating at around 300-400 W each. That’s 51.2 kW.

              Now you can draw some positives and negatives from that figure, such as:

              • Given that an iPhone 15 Pro’s A17 has a thermal design power of 8 W, GPT-4 on the server is about 6400 times more power-hungry than anything you can do on an iPhone. 10 seconds of GPT-4 take a similar amount of energy to an iPhone 15 Pro running flat out at maximum power for 18 hours (see the sanity check below). Now, in those 10 seconds OpenAI says they “handle multiple user queries simultaneously”, but still: we’re feeding the machine.
              • 51.2 kW is also roughly how much power a large SUV needs to hold a constant speed on a motorway. Each of those large clusters uses a similar amount of energy to a single 7-seater SUV, but serves many users at the same time. Plus, unlike cars, a large portion of their energy comes from renewables. So yes, I agree it’s a significant impact, but it’s largely overstated and we have bigger fish to fry; personal transport is a far bigger issue.
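
              A quick sanity check of those numbers in Python (the cluster size and per-GPU draw are this comment’s assumptions, not measured figures):

                  gpus = 128
                  watts_per_gpu = 400                      # top of the 300-400 W range
                  cluster_w = gpus * watts_per_gpu         # 51,200 W = 51.2 kW

                  iphone_w = 8                             # A17 thermal design power
                  print(cluster_w / iphone_w)              # 6400x

                  # 10 s of the cluster vs. the phone running at full tilt:
                  cluster_joules = cluster_w * 10          # 512,000 J
                  print(cluster_joules / iphone_w / 3600)  # ~17.8 hours
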
            • iopq@lemmy.world · 6 months ago

              I don’t need to outplay humans, I need to see the optimal line to analyze it. Chess is still not solved, so Leela Zero is still helpful because it gives better advice than older engines. Even Stockfish went neural network, just a smaller one that reads deeper. They still can’t tell us whether chess from the starting position ends in a draw the way checkers does.

              Killing jobs is good. It’s already freeing people from having to write things like promotional emails. Maybe they’re sad they don’t have a job anymore, but unemployment is 4%; it’s hardly difficult to get a different one. It’s not an important job anyway. I wouldn’t feel creative writing about a Labor Day sale or whatever.

        • ozymandias117@lemmy.world · 6 months ago

          I’m so curious to see how a Qualcomm gambit plays out for Microsoft.

          With Qualcomm’s ethos of supporting a chip for one year and then moving on, I have trouble believing they’ll update the drivers for a major Windows release.

          Google browbeat them for nearly 10 years, then ended up going with the mostly Samsung-designed chip called Tensor just to compete with Apple on years of updates.

  • hperrin@lemmy.world · 6 months ago

    This is all well and good, but what I really want is a Framework 2-in-1. That would be drool worthy.

    • Dudewitbow@lemmy.zip · 6 months ago

      It’s because whoever paid for the R&D on the screen asked for rounded corners. Framework just took the design and retooled the connectors for their own use case, as that’s significantly cheaper than commissioning an entirely new panel.

      • Kairos@lemmy.today · 6 months ago

        Well, time to not buy it. I have a Pixel 7a and the rounded corners drive me nuts, not to mention the hole-punch idiocy. The phone renders a rectangle; show me a rectangle.

        • plenipotentprotogod@lemmy.world · 6 months ago

          If you were actually hoping to buy one but the rounded corners are a dealbreaker, then you may be interested to know that the DIY edition lets you mix and match the older display with the newer motherboards. Looks like opting for the older display even saves you $130 on the purchase price.

          • Kairos@lemmy.today · 6 months ago

            I wanted the higher-resolution display for my existing Framework, but just… why???

            • Dudewitbow@lemmy.zip · 6 months ago

              Because there’s (very) expensive R&D attached to ordering a custom screen. It’s why many companies on the market use the same panel when making monitors (e.g., for gen 1 WOLED monitors from LG: LG, Asus, MSI(?), and I think Acer used the same panel).

              Just “wanting a higher-res screen” isn’t something that’s trivial to order, especially since the FW13 uses a 3:2 screen, an aspect ratio usually found on tablets.

              • Kairos@lemmy.today · 6 months ago

                Yes, I know it’s an R&D thing. I just don’t get the obsession with rounded screen corners.

        • thejml@lemm.ee · 6 months ago

          But hole punch cameras and round corners that go to the edges are so much better than a small bezel with square content! /s

          Drives me nuts as well.

          • tal@lemmy.today · 6 months ago

            I completely understand that there are people who want the smallest phone and laptop possible and will happily trade all kinds of things for that, including an obstructed display, but I definitely am not in that camp.

    • charizardcharz@lemmy.world · 6 months ago

      There are two display options: 2256×1504 at 60 Hz without rounded corners, and 2880×1920 at 120 Hz with rounded corners.

      The specs are identical to the Surface Pro 11’s, and Framework said they’re using an existing panel, so it may well be the same one. That makes it cheaper to develop, since M$ would have paid for the development.

    • tedu@azorius.net · 6 months ago

      What critical information are people putting in the six missing pixels?

  • Muffi@programming.dev · 6 months ago

    I really hope they start shipping to Denmark soon. We’re such a tiny market we often get ignored or forgotten.

  • AutoTL;DR@lemmings.world [bot] · 6 months ago

    This is the best summary I could come up with:


    Prices start at $899 for a pre-built or DIY model (before you add RAM, storage, an OS, or a USB-C charger), or $449 for a motherboard that can be used to upgrade an existing system.

    But both NPUs fall far short of the performance required for Recall and other locally accelerated AI features coming to Windows 11 24H2 later this year; Framework’s blog post doesn’t mention the NPU.

    It has a matte finish and a 120 Hz refresh rate, and it costs $130 more than the standard display or $269 when bought on its own to upgrade an existing laptop.

    All of Microsoft’s Surface devices released within the last few years have also used rounded corners, and I haven’t found that it affects functionality at all.

    Other odds and ends include multicolor USB-C Expansion Cards that are color-matched to the colorful bezel options, an English International keyboard for Linux users with a “super” key in the place of the Windows logo, and a new 9.2-megapixel front-facing webcam module with low-noise microphones (Framework says this module doesn’t work at its native resolution but instead groups four pixels together into one to deliver better performance at 1080p).

    Framework has also added new configuration options for the Ryzen 7040 version of the Laptop 13 that include the new display, and has lowered prices on those AMD configs and on “our remaining inventory of 13th-gen Intel Core systems.”


    The original article contains 740 words, the summary contains 234 words. Saved 68%. I’m a bot and I’m open source!

    • mlaga97@lemmy.mlaga97.space · 6 months ago

      Realistically, the target audience is organizations: nowadays most business laptops are carried between docking stations, with the occasional meeting or air travel in between, and 13" is an excellent size for those needs.

      When hooked to a docking station, the screen size and keyboard are entirely irrelevant, and modern laptop performance is… honestly crazy good.

      When in a meeting, it’s probably being used either to take notes fullscreen or to show a presentation, so pretty neutral.

      Finally, when traveling, you really can feel the difference between a 13" and a 15" when you’re running on too short a layover between flights.

      • MethodicalSpark@lemmy.world · 6 months ago

        You nailed it. I’m the target audience for this, and that is exactly how I use my laptop. Now if only Framework would add a touchscreen option, I’d buy it tomorrow.

        • bobs_monkey@lemm.ee · 6 months ago

          I’m right there with you. As silly as it is, I absolutely love the touchscreen on my Lenovo. I could live without it, sure, but I don’t wanna. Once Framework supports it, I’m there.

    • linearchaos@lemmy.world · 6 months ago

      13" is a good “on call”/travel size. It’s not big enough to do serious work on, but in a pinch it’s definitely big enough to get something done. It’s more comfortable on a flight, and you can toss it in a fairly small bag and take it with you. It’s lighter but can still manage a reasonably sized keyboard. And when I get to my house or my job, I’m plugging into an external mouse and keyboard anyway.

      It’s not for everyone, but my 13" machine’s motherboard died about 2 months ago and I am definitely in the market. Now if I can just actually buy one of these, we’ll see.

    • thejml@lemm.ee · 6 months ago

      To each their own. I’ve personally got a 14” MBP (which is physically the same size as my 13” was; they just have smaller bezels) and work provides me a 16” MBP. The 16 is unwieldy, massive, heavy, too large on my lap, barely fits in my laptop bag, and is a general pain to lug around. Every time I use it I’m reminded of how glad I am that I got the 14” instead. I feel like the 16 is the worst of both worlds: too big to truly be a portable machine, but too small to do real work on. Sometimes I’ll think “I wish I had more screen real estate” on the 14, but I do on the 16 as well, so it doesn’t really solve the issue while also being large and heavy.

      In short, it depends on what you like and what you need to do. Being an ultraportable is a big plus, and there are monitors in most places where I need more space anyway.

    • morrowind@lemmy.ml · 6 months ago

      The actual screen area is around that of a 14" 16:9 panel, if that makes a difference. (Quick math below.)
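
      A quick check of that claim, assuming the Framework 13’s 13.5" 3:2 panel:

          import math

          # Area in square inches of a panel with the given diagonal and aspect ratio.
          def panel_area(diagonal_in: float, w: int, h: int) -> float:
              scale = diagonal_in / math.hypot(w, h)  # inches per aspect-ratio unit
              return (w * scale) * (h * scale)

          print(panel_area(13.5, 3, 2))   # ~84.1 sq in (Framework 13)
          print(panel_area(14.0, 16, 9))  # ~83.7 sq in (14" 16:9)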

    • BrianTheeBiscuiteer@lemmy.world · 6 months ago

      Everyone is different. My favorite computer is my 11" netbook because, despite being slow, it fits in any bag, fits in my side table, is so light I can easily carry it with one hand without putting undue pressure on my wrists, lets me use most books as a lap desk, and doesn’t require me to clear off as much space on the table (I have two kids, so it’s never clear).