• Malygos_Spellweaver@alien.topB
    10 months ago

    I would love to see the results of a 3D chip with a powerful iGPU. Not sure if it would work, but if it is possible, why is AMD not doing it? Would it cannibalise 100–200 EUR GPUs (which are already practically nonexistent anyway)?

    • AnimeAlt44@alien.topB

      There is very little demand for a desktop chip with a powerful iGPU, so the ones that exist are derivatives of laptop chips and thus monolithic. So far there has not been a monolithic die with stacked cache.

      • Tired8281@alien.topB

        It’s easy to say there’s no demand for something that doesn’t exist. Sales are zero.

        • AnimeAlt44@alien.topB

          Laptop-based desktop chips do exist; they are literally a thing and have been for a while. Both AMD and Intel have not seen high demand for them. And even if that weren’t the case, your argument is not really an argument at all, since it could be used to justify literally anything that hasn’t been tried.

          • Falconman21@alien.topB

            I do think this will change quickly if Qualcomm’s ARM chips are as fast as the M2 Max, as they claim. And there’s reason to believe it, as they’ve bought/hired Apple’s head of processor development.

            Considering the M2 Max GPU is roughly equivalent to a 3080 mobile or a desktop 3060 Ti at significantly better efficiency, I think the demand for such monolithic chips could explode practically overnight.

            Assuming some x86-to-ARM translation gets most things running.

            • TwelveSilverSwords@alien.topB

              Maybe Qualcomm will get there in the future, but as things stand now, that’s not the case.

              The iGPU in the Snapdragon X Elite is in the same ballpark as the regular M2, not the Pro or Max variants.

              In 3DMark Wild Life Extreme, the X Elite GPU is 50% faster than the Radeon 780M.

              https://youtu.be/03eY7BSMc_c?si=HbhQPDt-AN_PP_TS

              Still, that’s nowhere near 3080 tier.

              Qualcomm also still needs to work on its Windows GPU drivers. Currently the only API the X Elite supports is DirectX 12.

              Some speculate that Qualcomm will eventually create a Windows Vulkan driver for Adreno, then use DXVK to support older DirectX versions and Zink to support OpenGL.

                • TwelveSilverSwords@alien.topB

                  Are you talking about the Snapdragon X Elite? Sure, their mobile chips do have Vulkan drivers.

                  But if you go to the Snapdragon X Elite product brief, you can see that the only supported API is DX12.

      • INITMalcanis@alien.topB

        There is very little demand for a powerful iGPU desktop chip

        There was little demand. Things change.

    • kif88@alien.topB

      AMD is technically doing this with the MI300, but that’s for enterprise and will cost tens of thousands.

    • Irisena@alien.topB

      We’re still very far from that. Even mobile phones don’t stack GPUs; they only stack RAM and NAND. RAM and cache are far simpler to stack because they are structurally simple, while a GPU is unbelievably complicated compared to those two. Maybe Intel’s tile / AMD’s chiplet approach is closer to what we want, but it’s still not as good as stacking.