• @excral@feddit.org · 18 points · 2 months ago

    I don’t get the point. Framework laptops are interesting because they are modular, but for desktop PCs that’s the default. And Framework’s desktop is less modular than a standard PC because the RAM is soldered.

  • Billiam · 1 point · 2 months ago

    So can someone who understands this stuff better than me explain how the L3 cache would affect performance? My X3D has a 96 MB cache, and all of these offerings are lower than that.

    • @brucethemoose@lemmy.world · 7 points · 2 months ago

      This has no X3D; the L3 is shared between CCDs. The only odd thing about this is that it has a relatively small “last level” cache on the GPU/memory die, but X3D CPUs are still the kings of single-threaded performance since that L3 sits right on the CPU.

      This thing has over twice the RAM bandwidth of the desktop CPUs though, and some apps like that. Just depends on the use case.
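
      As a rough sketch of where that gap comes from (assuming the commonly quoted 256-bit LPDDR5X-8000 bus on this APU and dual-channel DDR5-6000 on a typical desktop; both figures are assumptions, not specs stated in this thread):

```python
# Peak bandwidth (GB/s) = bus width in bytes * transfers per second
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    return (bus_width_bits / 8) * transfer_rate_mts * 1e6 / 1e9

ai_max = peak_bandwidth_gbs(256, 8000)   # assumed 256-bit LPDDR5X-8000 -> ~256 GB/s
desktop = peak_bandwidth_gbs(128, 6000)  # assumed dual-channel DDR5-6000 -> ~96 GB/s
print(f"{ai_max:.0f} GB/s vs {desktop:.0f} GB/s -> {ai_max / desktop:.1f}x")  # ~2.7x
```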

        • @brucethemoose@lemmy.world · 1 point · 2 months ago

          Honestly CPUs are bad for AI, especially in this case where there’s a GPU on the same bus anyway.

          Off the top of my head, video encoding and compression/decompression really like raw memory bandwidth. Maybe some games? Basically, wherever the M Pro/Max CPUs are really strong compared to the base M, these will excel in the same way.

  • @MonkderVierte@lemmy.ml · 10 points · 2 months ago

    and more at people who want the smallest, most powerful desktop they can build

    Well, there’s this:

    Yeah, the screw holes didn’t fit, that’s why. And the cooler didn’t fit the case, obviously. And the original cooler couldn’t keep up with the CPU’s turbo. It’s fine, it still runs most games in 3K on the iGPU.

  • @ganoo_slash_linux@lemmy.world · 23 points · 2 months ago

    I feel like this is a big miss by Framework. Maybe I just don’t understand, because I already own a Velka 3 that I used happily for years, and building small form factor with standard parts seems better than what this is offering. Better as in better performance, aesthetics, space optimization, upgradeability - SFF is not a cheap or easy way to build a computer.

    The biggest constraint when building in the sub-5-liter format is GPU compatibility, because not many manufacturers even make boards in the <180mm length category. You also can’t go much higher than 150-200 watts because cooling is so difficult. There are still options though; I rocked a PNY 1660 Super for a long time, and the current most powerful option is a 4060 Ti. Although upgrades are limited to what manufacturers occasionally produce, it is upgradeable, and it is truly desktop performance.

    On the CPU side, you can physically put in whatever CPU you want. The only limitation is that the cooler, alpenfohn black ridge or noctua l9a/l9i, probably won’t have a good time cooling 100+ watts without aggressive undervolting and power limits. 65 watts TDP still gives you a ryzen 7 9700x.

    Motherboards have the SFF tax but are high quality in general. Flex ATX PSUs were a bit harder to find 5 or 6 years ago, but now the black 600W Enhance ENP is readily available from Velkase’s website. Drives and memory are completely standard: an M.2 drive fits on the motherboard, and a 2.5-inch SATA drive also fits in one of the corners. Normal low-profile DDR5 is replaceable / upgradeable.

    What Framework is releasing is more like a laptop board in a ~4-liter case, and I really don’t like that in order to upgrade the CPU, GPU or memory you have to replace the entire board, because it’s a soldered-on APU rather than socketed or discrete components. Framework’s enclosure hasn’t been designed to hold a motherboard plus a discrete GPU, and the board doesn’t have a PCIe slot if you wanted to attach a card via riser in another case. It could be worse, but I don’t see this as a good use of development resources.

    • @Acters@lemmy.world · 4 points · 2 months ago

      I think the biggest limiting factor for your mini PC will always be the VRAM and any workload that benefits from that fast RAM. Really, I think this mini PC from Framework only makes sense for certain workloads. It was positioned as a mobile chip and is certainly very power efficient. On the other hand, I don’t think it’s meant for scaling up; it’s more for testing at home or working at home on the cheap. It isn’t something I expected from Framework, though, as I expected them to maintain modularity, and the only modularity here is the little USB cards and the 3D-printed front panel designs lol

      Edit
      Personally, I am in that niche market of high RAM speed, plus access to high VRAM for occasional LLM testing. Though it is an AMD part and I don’t know if I am comfortable switching from Nvidia for that workload just yet. Renting a GPU is just barely cheap enough.

  • @4shtonButcher@discuss.tchncs.de · 17 points · 2 months ago

    Now, can we have a cool European company doing similar stuff? At the rate it’s going I can’t decide whether I shouldn’t buy American because I don’t want to support a fascist country or because I’m afraid the country might crumble so badly that I can’t count on getting service for my device.

  • FireWire400 · 16 points · 2 months ago

    This is not really that interesting and kinda weird given the non-upgradability, but I guess it’s good for AI workloads. It’s just not that unique compared to their laptops.

    I’d love a mid-tower case with swappable front panel I/O and modular bays for optical drives; that would’ve been the perfect product for Framework to make, IMO.

    • @brucethemoose@lemmy.world · 3 points · 2 months ago

      They’d be competing with a bajillion other case makers. And I’m pretty sure there are already cases with what you’re asking for (such as 5.25″ bay-mounted I/O running off USB headers, at least).

      Like… I don’t really see what framework can bring making a case. Maybe it could be a super SFF mobo with a GPU bay, but that’s close to what they did here.

      • FireWire400 · 1 point · 2 months ago

        There may already be such a case, but you and I have never heard of it, and it’s probably from some Chinese no-name brand.

        A proper metal mid-tower case with modular front panel I/O (using Framework’s system with the USB-C converters) and modular optical drive/hard drive bays would be unique.

    • @bluewing@lemm.ee · 3 points · 2 months ago

      Minis are the latest hotness for desktop computing. I’ve been running a dirt-cheap $90 US mini for 2 years now. It fits extremely well on my desk, tucked in under the monitor, leaving plenty of room for all the other tasks I do daily.

      Will it play the latest hot new video game? Nope. But it will run OnlyOffice, FreeCAD and FreeDoom just fine.

    • @commander@lemmings.world · -2 points · 2 months ago

      It’s just not that unique compared to their laptops.

      This’ll be a good sell for the useful idiot crowd that has been conditioned to think gaming laptops are the devil.

          • @commander@lemmings.world · 1 point · 2 months ago

            Lol. Are you nuts? Am I really supposed to sit here and list off what makes a great product for a great price?

            Let’s be real. You don’t like how I criticized how people like you are getting taken for a ride so you’re desperate to make it seem like it’s not true.

            The sooner you realize how you’re being taken advantage of, the sooner you can start to do something about it.

            • @brucethemoose@lemmy.world · 2 points · 2 months ago

              Am I really supposed to sit here and list off what makes a great product for a great price?

              I don’t understand what you are asking for.

              You don’t have to be extensive, but… what would you want instead? A more traditional Mini PC? A dGPU instead? A different size laptop? Like, if you could actually tell Framework what you want, in brief, what would you say?

              • @commander@lemmings.world · 1 point · 2 months ago

                Fair enough.

                I skimmed it for a few seconds, got a little bit ill at the $1100 starting price, and then it occurred to me: what is this for?

                Wasn’t framework’s whole thing about making modular laptops? What value are they bringing to the mini-ITX market? They’re already modular. In fact, it looks like they’re taking away customizability with soldered RAM.

                You asked me what I want, and this is definitely what I don’t want. If they wanted to make this product appealing to me, they’d have to lower the price and live more modest lifestyles with the more modest profit margins.

                Edit: After closer inspection (albeit, not that close so I may have missed something) it looks like this… thing doesn’t even have a dedicated GPU. Yeah, framework can suck my fucking balls lol.

                You can literally get a 4070 gaming laptop these days for ~$1000 and framework is trying to push this shit? They can fuck off so hard it’s not even funny. This is why the free world never has enough to go around, because we waste our excess on dumb shit like this.

                Here’s a gaming laptop with a 4070 and a 144hz screen for $900 at Walmart:

                https://www.walmart.com/ip/Lenovo-LOQ-15-6-FHD-144Hz-Gaming-Notebook-Ryzen-7-7435HS-16GB-RAM-512GB-SSD-NVIDIA-GeForce-RTX-4070-Luna-Grey-Octa-Core-Display-Ram/13376108763

                Fuck framework.

                • @brucethemoose@lemmy.world · 1 point · 2 months ago

                  This is ostensibly more of a workstation/dev thing. The integrated GPU is more or less a very power-efficient laptop 4070/4080 with unlimited VRAM, depending on which APU you pick, and the CPU is very fast, with desktop Ryzen CCDs but double the memory bandwidth of what even a 9800X3D has. In that sense, it’s a steal compared to Nvidia DIGITS or an Apple M4 Max, and Mini PC makers’ alternatives haven’t really solidified yet.

                  I think Framework knows they can’t compete with a $900 Walmart laptop and the crazy bulk pricing/corner cutting they do, nor can they price/engineer things (with the same bulk discounts) at the higher end like a ROG Z13/G14.

                  So… this kinda makes sense to me. They filled a gap where OEMs are enshittifying things, which feels very framework to me.

  • @Blackmist@feddit.uk · 72 points · 2 months ago

    Not really sure who this is for. With soldered RAM, it’s less upgradeable than a regular PC.

    AI nerds, maybe? It sure has a lot of RAM in there, potentially attached to a GPU.

    But how capable is that really when compared to a 5090 or similar?

    • @brucethemoose@lemmy.world · 46 points · 2 months ago

      The 5090 is basically useless for AI dev/testing because it only has 32GB. Might as well get an array of 3090s.

      The AI Max is slower and finicky, but it will run things you’d normally need an A100 that costs as much as a car to run.

      But that aside, there are tons of workstation apps gated by nothing but VRAM capacity that this will blow open.
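
      To make the capacity point concrete, here’s a loose sizing sketch for LLM weights alone; the model sizes, bytes-per-parameter values, and the ~96 GB GPU-allocatable figure are illustrative assumptions:

```python
# Memory to hold model weights only (ignores KV cache and activations).
# bytes_per_param: 2.0 for FP16/BF16, ~0.55 for 4-bit quantization with overhead.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * bytes_per_param  # (1e9 params * bytes) / 1e9 = GB

for params, precision, bpp in [(70, "FP16", 2.0), (70, "4-bit", 0.55), (123, "4-bit", 0.55)]:
    gb = weight_memory_gb(params, bpp)
    print(f"{params}B @ {precision}: ~{gb:.0f} GB | fits 32 GB card: {gb < 32} | fits ~96 GB pool: {gb < 96}")
```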

        • Amon · 3 points · 2 months ago

          No, it runs off integrated graphics, which is a good thing because you can dedicate a large amount of RAM to GPU workloads

            • @brucethemoose@lemmy.world · 3 points · 2 months ago

              Most CUDA or PyTorch apps can be run through ROCm. Your performance/experience may vary. ZLUDA is also being revived as an alternate route to CUDA compatibility, since the vast majority of development/inertia is with CUDA.

              Vulkan has become a popular “community” GPU agnostic API, all but supplanting OpenCL, even though it’s not built for that at all. Hardware support is just so much better, I suppose.

              There are some other efforts trying to take off, like MLIR-based frameworks (with Mojo being a popular example), Apache TVM (with MLC-LLM being a prominent user), XLA or whatever Google is calling it now, but honestly getting away from CUDA is really hard. It doesn’t help that Intel’s unification effort is kinda failing because they keep dropping the ball on the hardware side.
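
              For what it’s worth, ROCm builds of PyTorch expose the GPU through the regular torch.cuda API (HIP underneath), so a basic compatibility check looks something like this minimal sketch (not specific to this machine):

```python
import torch

# ROCm builds route torch.cuda.* through HIP; they set torch.version.hip,
# while CUDA builds set torch.version.cuda instead.
print("GPU available:", torch.cuda.is_available())
if torch.version.hip:
    print("Backend: ROCm/HIP", torch.version.hip)
else:
    print("Backend: CUDA", torch.version.cuda)

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1e9:.0f} GB visible to the GPU")
```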

      • @KingRandomGuy@lemmy.world · 23 points · 2 months ago

        Useless is a strong term. I do a fair amount of research on a single 4090. Lots of problems can fit in <32 GB of VRAM. Even my 3060 is good enough to run small scale tests locally.

        I’m in CV, and even with enterprise grade hardware, most folks I know are limited to 48GB (A40 and L40S, substantially cheaper and more accessible than A100/H100/H200). My advisor would always say that you should really try to set up a problem where you can iterate in a few days worth of time on a single GPU, and lots of problems are still approachable that way. Of course you’re not going to make the next SOTA VLM on a 5090, but not every problem is that big.

        • @brucethemoose@lemmy.world · 3 points · 2 months ago

          Fair. True.

          If your workload/test fits in 24GB, that’s already a “solved” problem. If it fits in 48GB, it’s possibly solved with your institution’s workstation or whatever.

          But if it takes 80GB, as many projects seem to require these days since the A100 is such a common baseline, you are likely using very expensive cloud GPU time. I really love the idea of being able to tinker with a “full” 80GB+ workload (even having to deal with ROCM) without having to pay per hour.

          • @wise_pancake@lemmy.ca · 2 points · 2 months ago

            This is my use case exactly.

            I do a lot of analysis locally; this is more than enough for my experiments and research. 64 to 96 GB of VRAM is exactly the window I need. There are analyses I’ve had to let run for 2 or 3 days, and dealing with that on the cloud is annoying.

            Plus this will replace GH Copilot for me. It’ll run voice models. I have diffusion model experiments I plan to run that are currently totally inaccessible to me locally (not just image models). I’ve got workloads that take 2 or 3 days at 100% CPU/GPU that are annoying to run in the cloud.

            This basically frees me from paying for any cloud stuff in my personal life for the foreseeable future. I’m trying to localize as much as I can.

            I’ve got tons of ideas I’m free to try out risk free on this machine, and it’s the most affordable “entry level” solution I’ve seen.
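
            As a rough break-even sketch for that cloud-vs-local trade-off (the hourly rate and machine price below are assumptions for illustration; the thread only mentions an $1,100 starting price, and cloud rates vary widely):

```python
# All numbers are illustrative assumptions, not quoted prices.
cloud_rate_per_hour = 2.00   # assumed rate for a large-VRAM cloud GPU
machine_price = 2000.00      # assumed price for a high-memory configuration

breakeven_hours = machine_price / cloud_rate_per_hour
print(f"Break-even after ~{breakeven_hours:.0f} GPU-hours "
      f"(~{breakeven_hours / 24:.0f} days of continuous runs)")
# At 2-3 days (48-72 h) per experiment, that is roughly 14-20 runs.
```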

            • @brucethemoose@lemmy.world · 2 points · 2 months ago

              And even better, “testing” it. Maybe I’m sloppy, but I have failed runs, errors, hacks, hours of “tinkering,” optimizing, or just trying to get something to launch that feels like an utter waste of an A100 mostly sitting idle… Hence I often don’t do it at all.

              One thing you should keep in mind is that the compute power of this thing is not like an A/H100, especially if you get a big slowdown with rocm, so what could take you 2-3 days could take over a week. It’d be nice if framework sold a cheap MI300A, but… shrug.

              • @wise_pancake@lemmy.ca · 3 points · 2 months ago

                I don’t mind that it’s slower, I would rather wait than waste time on machines measured in multiple dollars per hour.

                I’ve never locked up an A100 that long, I’ve used them for full work days and was glad I wasn’t paying directly.

          • @KingRandomGuy@lemmy.world · 2 points · 2 months ago

            Yeah, I agree that it does help for some approaches that do require a lot of VRAM. If you’re not on a tight schedule, this type of thing might be good enough to just get a model running.

            I don’t personally do anything that large; even the diffusion methods I’ve developed were able to fit on a 24GB card, but I know with the hype in multimodal stuff, VRAM needs can be pretty high.

            I suspect this machine will be popular with hobbyists for running really large open weight LLMs.

            • @brucethemoose@lemmy.world · 1 point · 2 months ago

              I suspect this machine will be popular with hobbyists for running really large open weight LLMs.

              Yeah.

              It will probably spur a lot of development! I’ve seen a lot of bs=1 speedup “hacks” shelved because GPUs are fast enough, and memory efficiency is the real bottleneck. But suddenly all these devs are going to have a 48GB-96GB pool that’s significantly slower than a 3090. And multimodal becomes much more viable.

              Not to speak of better ROCM compatibility. AMD should have done this ages ago…
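
              As a crude sketch of what “significantly slower than a 3090” means for bs=1 decoding, which is roughly bandwidth-bound (the ~256 GB/s and ~936 GB/s figures are assumptions):

```python
# Upper bound on single-stream decode speed: each generated token streams
# the (active) weights from memory once, so tok/s <= bandwidth / model size.
def max_tokens_per_s(bandwidth_gbs: float, model_size_gb: float) -> float:
    return bandwidth_gbs / model_size_gb

model_gb = 40  # e.g. a ~70B model at 4-bit, illustrative
for name, bw in [("AI Max (~256 GB/s)", 256), ("RTX 3090 (~936 GB/s)", 936)]:
    print(f"{name}: <= {max_tokens_per_s(bw, model_gb):.0f} tok/s")
# The 3090 is ~3.7x faster per token, but a 40 GB model does not fit in its 24 GB at all.
```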

        • @KeenFlame@feddit.nu · 1 point · 2 months ago

          Exactly, 32 GB is plenty to develop on, and why would you need to upgrade RAM? It’s been years since I did that in any computer, let alone a tensor workstation. I feel like they made pretty good choices for what it’s for

    • ArchRecord · 15 points · 2 months ago

      For the performance, it’s actually quite reasonable. 4070-like GPU performance, 128 GB of memory, and basically the newest Ryzen CPU performance, plus a case, power supply, and fan, will run you about the same price as buying a 4070, case, fan, power supply, and CPU of similar performance. Except you’ll actually get a faster CPU with the Framework one, and you’ll also get more memory that’s accessible by the GPU (up to the full 128 GB, minus whatever the CPU is currently using).

        • ArchRecord · 5 points · 2 months ago

          “It’s too expensive”

          “It’s actually fairly priced for the performance it provides”

          “You people must be paid to shill garbage”

          ???

          Ah yes, shilling garbage, also known as: explaining that the price to performance ratio is just better, actually.

  • @Jollyllama@lemmy.world · 26 points · 2 months ago

    Calling it a gaming PC feels misleading. It’s definitely geared more towards enterprise/AI workloads. If you want upgradeable just buy a regular framework. This desktop is interesting but niche and doesn’t seem like it’s for gamers.

  • @SuperSleuth@lemm.ee · 0 points · 2 months ago

    What’s crazy is I still can’t make it onto their website without waiting in a 20 minute queue. Stupid.

  • @wise_pancake@lemmy.ca · 20 points · 2 months ago

    Question about how shared VRAM works:

    Do I need to specify the split in the BIOS, with that allocation then fixed at runtime, or can VRAM be allocated dynamically as the workload needs it?

    On macOS you don’t really have to think about this, so I’m wondering how this compares.

  • @warmaster@lemmy.world · 17 points · 2 months ago

    This is one stupid product. It really goes against everything the framework brand has identified with.

    • @ilinamorato@lemmy.world · 30 points · 2 months ago

      Desktops are already that, though. In order for them to distinguish themselves in the industry, they can’t just offer another modular desktop PC. They can’t offer prebuilts, or gaming towers, or small form factor units, or pre-specced you-build kits. They can’t even offer low-cost micro-desktops. All of those markets are saturated.

      But they can offer a cheap Mac Studio alternative. Nobody’s cracked that nut yet. And it remains to be seen if this will be it, but it certainly seems like it’s lined up to.

      • @BeardedGingerWonder@feddit.uk · 2 points · 2 months ago

        I’m not super well informed, but a socketable AMD NUC-form-factor machine would’ve been nice: a single PCIe slot, M.2, and two SO-DIMM RAM slots would’ve been good. They could’ve even given the option to route the PCIe slot externally and offered an add-on eGPU case that’s actually worth a damn, à la Mega Drive/Sega CD.

    • @brucethemoose@lemmy.world · 35 points · 2 months ago

      I’d argue not. It’s as modular/repairable as the platform can be (with them outright acknowledging the problem of the soldered RAM), and it’s not exorbitantly priced for what it is.

      But what I think is most “Framework” is shooting for a niche big OEMs have completely flubbed or enshittified. There’s a market (like me) that wants precisely this, not like a framework-branded gaming tower or whatever else a desktop would look like.

      • Ulrich · -17 points · 2 months ago

        It’s as modular/repairable as the platform can be

        It can’t be. That’s the point.