Thank you for watching! We’re writing all the time at work, whether it’s emails, drafting video scripts, etc., but having a tool like Grammarly will help improve your productivity and let you work more efficiently! It’s FREE, why not? Sign up for a FREE account and get 20% off Grammarly Premium: grammarly.com/LTT
Hey! I work in the aerospace engineering field and we recently purchased about 12 xenowulf servers, each with 5 GPUs, for simulation work. They cut our fluid works simulation times from HOURS down to single-digit minutes. We used to have kind of like a whiteboard schedule/queue for when someone wanted to use the servers; now it’s just hit send to queue. Even with a team of 50 or so people using it, you can go make a coffee, come back, and the data is ready to be analysed. Absolutely absurd
My high school geometry teacher originally went into the aerospace engineering field, but opted to teach high schoolers somehow. The man is an amazing guy and super smart too
The fan controller likely looks at component temperatures and adjusts pump speed as well. When components get hot you need to increase flow rates but may not need to increase fan speed. Fan speed would be entirely dependent on the coolant temperature.
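Roughly the logic I mean, as a sketch only (the sensor names and curve points are made up, this isn't from any real controller firmware):

```python
# Pump duty follows component temperature (more flow when silicon is hot),
# fan duty follows coolant temperature (more airflow when the water is hot).
def lerp_curve(value, points):
    """Piecewise-linear map from a sensor reading to a duty cycle (0-100%)."""
    points = sorted(points)
    if value <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if value <= x1:
            return y0 + (y1 - y0) * (value - x0) / (x1 - x0)
    return points[-1][1]

PUMP_CURVE = [(40, 40), (70, 70), (90, 100)]   # assumed CPU/GPU temp degC -> pump duty %
FAN_CURVE  = [(28, 20), (35, 55), (45, 100)]   # assumed coolant temp degC -> fan duty %

def control_step(component_temp_c, coolant_temp_c):
    return lerp_curve(component_temp_c, PUMP_CURVE), lerp_curve(coolant_temp_c, FAN_CURVE)

print(control_step(82, 31))  # hot silicon, still-cool water: pump ramps up, fans stay modest
```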
Might have missed something, but as far as I can see the design of the machine forces air in from the front and pushes it to the back. Benchmarking it with the side panel off seems to me like it would mess up the cooling performance? Edit: Never mind, saw the side panel go back on at 26:50
Amazon Web Services. Some or most VFX companies do have their own data centres/servers, but cloud is used where bandwidth permits. For example, I believe Weta sent the data for Avatar 2 to a number of Australian-based AWS centres to be rendered, as Weta's own render server wasn't up to the task... or at least not while working on other projects at the same time. I have heard it was about 18PB of data.
@demoniack81 I don't see how your description equals "completely wrong". A MOSFET can have extraordinarily high voltage and do nothing to a person. I'll agree that it's an interplay--both matter.
You won't get a shock, but Linus was right - heating is current squared times the resistance, so you drop a screw in the wrong place and there will be a lot of heat. I have seen a spanner melt like a fuse on a big 24V truck battery when it hit bodywork while tightening a bolt on the busbar - it welded itself to the bodywork, then just melted. Not great for the battery.
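Back-of-the-envelope for why the screw gets hot (the 5 milliohm short resistance is just a guess for a screw plus its contact points):

```python
# P = I^2 * R: with a stiff low-voltage rail, the current is limited only by
# the tiny resistance of the short itself.
V = 12.0          # volts
R_short = 0.005   # ohms, assumed resistance of screw + contacts
I = V / R_short   # amps the source would try to deliver
P = I**2 * R_short
print(f"{I:.0f} A, {P/1000:.1f} kW dissipated in the short")
# ~2400 A and ~29 kW; a real supply or battery will sag and current-limit,
# but it's still plenty to melt steel.
```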
@Rory Completely wrong. It doesn't matter how many amps the power supply is capable of providing; you need voltage to drive current through a given load. A car battery can easily deliver 800A while cranking, but it will never give you even the slightest shock, in the same way that it will not only _not obliterate_ a 230V light bulb (which can only handle 500mA at most), but in fact won't even get anywhere close to _turning it on._ Something like a capacitive dropper connected to a 230V supply might only be able to deliver 50mA, but it will still kill you, because 30mA is enough to stop your heart.
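Quick worked example of that point (body resistance figures are rough assumptions, real values vary a lot):

```python
# Current through *you* is V / R_body, no matter how many amps the source
# could supply in principle.
R_DRY = 100_000.0   # ohms, assumed dry-skin, hand-to-hand resistance
R_WET = 1_000.0     # ohms, assumed wet/broken-skin worst case
for volts in (12, 230):
    print(f"{volts} V: {volts / R_DRY * 1000:.2f} mA dry, {volts / R_WET * 1000:.0f} mA wet")
# 12 V pushes a fraction of a milliamp even in the wet case is ~12 mA;
# 230 V in the wet case is ~230 mA, far past the ~30 mA danger threshold.
```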
For machines like this you should really consider benchmarking Neural Network training, that's one of their major use cases and a place where you can truly see the scaling
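Even a crude per-GPU throughput sketch like this would show it (assumes PyTorch with CUDA; the model and sizes are made up, not a real benchmark suite):

```python
import time
import torch
import torch.nn as nn

def bench(index, steps=50, batch=256):
    """Time a simple training loop on one GPU and return samples/second."""
    dev = torch.device(f"cuda:{index}")
    model = nn.Sequential(
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),
    ).to(dev)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(batch, 4096, device=dev)
    y = torch.randint(0, 1000, (batch,), device=dev)
    torch.cuda.synchronize(dev)
    t0 = time.time()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    torch.cuda.synchronize(dev)
    return steps * batch / (time.time() - t0)

for i in range(torch.cuda.device_count()):
    print(f"GPU {i}: {bench(i):.0f} samples/s")
```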
@TitaniumEye I remember the first time I saw a water-cooled PC, it was like seeing a sci-fi/witchery computer with lots of tubes. Pretty cool how it is becoming more popular over the years.
@therodyman700 Uh... no. There is a ton of competition. EK have peaked and are in decline... a number of brands far outperform EK these days. They're also not an NA company...
And now they've come full circle, cutting corners and producing cheap, defective products. It's sad, actually; they need to invest more into R&D and quality control.
17:30 - It would have been HYSTERICAL if he BRICKED IT when he was unplugging each power supply during the Windows update. (But only because he's wealthy enough to buy another one, and not even notice.)
He's not wealthy enough to just buy another one - I'm pretty sure even for Linus, 70k isn't just a simple done-and-done story - especially if you consider he just bought a whole new building and is equipping it with high-end testing equipment, while still having 80 staff that need paychecks and so on and so forth :D
I love the way Linus says "check this out" like he knows he's about to blow your mind with some cool thing he just noticed about whatever project he's working on
@du ich Try whatever automotive parts retailer is nearest you; many have all the parts on a rack and you can pick one by sight. There will probably be some car that has a hose in the shape you want, or one that can be cut down to the shape you want. It's rubber and nylon string; it's easy to cut with a sharp knife.
That small 90-degree tube is probably an automotive part that comes bent 90 degrees from the factory (if so, it's synthetic rubber reinforced with nylon string). I was disappointed that you guys didn't raid the cooling system parts bin at Princess Auto or Canadian Tire before your car radiator PC video; there would have been parts there that could have joined your hoses up without leaks.
I remember rendering one picture (something like 4K) took a whole night or more on a Core i5 3470. Rendering is not difficult if the model is not too complicated (or you have enough memory) and you have time to spare, but if not, you'll miss the deadline. You can also use a cloud service, like Autodesk something or GitHub Actions (free, but not designed for rendering). And I forgot: that's if you are a student, bought the expensive Autodesk software, or know how to use Blender. Now I'm not.
6:50 I've been using EK's "old" aluminum 240mm Fluid Gaming kit since like 2017, and upgraded it to 240mm + 120mm, and even upgraded to Ryzen from my old Core i5 2320 that I started overclocking using this kit. It's been amazing all these years, and I change out the fluid once a year; haven't had a problem with leaks, overheating, or any other problems you could have with old watercooling setups
I think this product is amazing and I'd want one... but I'm also thinking how awful and terrible this could be for artists because of how much crunch time this can buy a studio. Sometimes you need obstacles and hard cut-offs to pry things away from directors or producers. Nit-picking and redoing until the last minute does not always make a better film, and NEVER makes a good work environment for the artists.
I have Silicon Graphics Octane workstations that have similar connectors between the PSU and the front-plane which makes it super easy to change components without disassembling the machine. I'd love to see an LTT video that shows how nice some non-PC hardware is for maintenance. That would be sick
My favorite part about the screwdriver so far is their insistence that it is real and is coming soon(ish), which almost makes it seem less real if it weren't for all the prototypes and updates they've been showing us.
@Linus, if you wanna check the pure render time without the asset loading time (in Blender), go to Render Properties > Performance > Final Render and enable Persistent Data, so the next time you run the benchmark the assets will already be loaded (hurts the RAM a little bit)
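If it helps, I believe the same setting can be flipped from Blender's Python console instead of digging through the UI (rough sketch, property name from memory for recent Blender versions):

```python
# Run inside Blender's scripting workspace / Python console.
import bpy

scene = bpy.context.scene
scene.render.use_persistent_data = True  # keep scene data loaded between renders
```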
Chess player / enthusiast here > I would love to see some performance benchmarks from those CPUs running something like a Stockfish analysis of specific positions. My 10850K runs a 16-thread analysis at about 1200k nps (nodes per second). Imagine what this thing could do :O
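For anyone curious, roughly how I measure that, using the python-chess library and a local Stockfish binary (the path and thread/hash settings are just my setup, adjust for yours):

```python
import chess
import chess.engine

# Launch a local Stockfish binary over UCI.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
engine.configure({"Threads": 16, "Hash": 1024})

board = chess.Board()  # or load a specific FEN for the position you care about
info = engine.analyse(board, chess.engine.Limit(time=10))
print("nodes/s:", info.get("nps"), "depth:", info.get("depth"), "score:", info.get("score"))
engine.quit()
```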
11:47 Looks like tubing from a car, or from industrial/commercial HVAC for factories/plants. So cool to see, and those PSUs are nifty, as is the modularity: if one fails, you just swap in a new one 👍🏻
@Thierry Faquet And we are discussing Corsair here, who as of late has not had a good track record with their products. I have officially switched away from everything Corsair due to its poor quality. My headset died in 3 months; my buddy's lasted about the same. My Corsair case isn't doing so hot. Their internal products I used do alright, but feel kinda crummy for what I should be getting (older build, but it still should be faster and run cooler than this). The RGB mousepads like to just randomly not work, like, ever. The accessories I haven't had major issues with are the mouse and keyboards, but then again, if you can't do those right you don't deserve to be a company.
@drister007 Actually nope, EK is right. As long as your water temp is over the room temp you can exchange heat... that's really basic physics; I can't see how Corsair could say something that dumb. Having 3 rads stacked or one big one of the same total size is the same thing. Even Corsair has made plenty of marketing-build PCs with multiple radiators inside a standard ATX case, which for all purposes are stacked radiators (having them 20 inches apart in a closed case doesn't make any more difference than being 2 inches apart; you still have the same hot air inside). You can make the argument that a triple-rad build can have 2 as intake and 1 as exhaust, but realistically, while technically better, the temp difference would be minimal. And that's not an option in a server blade, which is the point of this video.
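Toy calculation of that argument, if it helps (all numbers made up, just to show the direction of the effect):

```python
# Model heat rejected by each radiator as proportional to
# (water temperature - incoming air temperature).
UA = 30.0             # W/K, assumed effective conductance per radiator
water_c = 45.0        # assumed loop water temperature
room_air_c = 25.0     # room air hitting the front radiator

rad1 = UA * (water_c - room_air_c)        # front rad sees room air
air_after_rad1_c = 33.0                    # assumed pre-warmed air leaving rad 1
rad2 = UA * (water_c - air_after_rad1_c)   # stacked rad sees warmer air

print(rad1, rad2, rad1 + rad2)  # rad 2 rejects less, but still adds real cooling
```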
Well, I think it's more "it doesn't make it any cooler", but if you need more surface area, in that configuration it's not any hotter and it takes up far less space.
Linus -- the gains are even bigger than they look once you normalize them. Going from almost 4 minutes per frame to 1 minute is roughly a 4x speedup on its own, and it did that with only 5 GPUs instead of the 8 it should otherwise take, so per GPU it's closer to 6x. That's before considering it was also doing it at less total power draw than 8 gaming systems in parallel would (2.4kW vs 4.8kW, assuming an average 600W draw per gaming machine with the GPU pegged). That adds up really quickly, because if you're the type of company to buy one of these things, it's probably running 24/7 other than when you're switching jobs or updating things. It probably pays for itself in a handful of years just from that alone.
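Rough arithmetic, using approximate numbers from the video and the assumptions above (~600 W per gaming box):

```python
# Normalize the raw speedup by GPU count and by power draw.
baseline_min, server_min = 3.67, 1.0     # ~3:40 vs ~1:00 per frame
baseline_gpus, server_gpus = 8, 5
baseline_kw, server_kw = 4.8, 2.4        # assumed total power draws

speedup = baseline_min / server_min
per_gpu = speedup * baseline_gpus / server_gpus
per_watt = speedup * baseline_kw / server_kw
print(f"{speedup:.1f}x overall, {per_gpu:.1f}x per GPU, {per_watt:.1f}x per watt")
```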
A few versions ago Blender added features to improve performance for multi-frame renders and repeated rendering, so if you render a short animation the impact should be even bigger
Hey LTT, if you like that controller for the water cooling, check out the Aquaero 6 LT ... same idea (onboard SoC running things) and Aquasuite allows you to do some funky tuning of fans and pump based on sensor inputs.
In animation, when dealing with these kinds of rendering speeds you'd probably want to optimize the scene and make sure that the 3D assets are cached in memory, so they don't need to be loaded again and again for every single frame. Once the assets are loaded and cached, you only need to wait for the actual rendering to finish, so even the 60s Gooseberry render would drop to ~15 sec
@Simon K. When the scene is close to final, it's commonly used when adding small objects, slightly moving items, or comparing how materials/shaders/nodes interact with lighting. It can be a huge time saver; don't underestimate it. Some engines let you pick which objects get baked, so you don't need to bake everything.
@Fractal Paradox I would also like to clarify that when you say simulation, you mean physics simulation (cloth, particles, fluids, etc.). The reason why it is used that much for this specific case is that if the "geometry interactions" are converted into keyframes, the result will also be the desired one. If we run the simulations again and again, the precision of the "interactions" is not exact, thus making things like marble machines close to impossible to create in a digital environment.
Things don't get rendered sequentially on one CPU/GPU; we use distributed rendering, where we fire off all the frames together on multiple servers at once, so none of those optimizations you mention are used in a real big-scale production environment.
That's totally insane that it can render multiple frames of animation per second. On Toy Story 1, each frame took more than 24 hours to render. With this machine, that whole movie could render in under 6 hours. That's nuts.
Nice. I always come back if I have the feeling I did something good. This brings me down to earth and reminds me my PC is utter crap and I have to be ashamed of it
Stacking radiators does in fact work just fine as long as there's enough airflow. I have a system that runs on 2 single-120 radiators sandwiched around a single noctua ippc-3000 fan. It holds temp just fine but does get a bit noisy if it's running hard.
23:49 Linus definitely already knows why, but if anyone else is curious: just connecting a bunch of PCs is actually a thing (it's called distributed computing), but it's so much more involved than simply plugging a few PCs together. The performance of individual processors (or entire computers) doesn't really matter; the thing that has the largest effect on the performance/efficiency of a system with this many processors, and the cause of the largest amount of difficulty, is coordinating them. You could have a bunch of the fastest CPUs that money can buy, but if they can't properly communicate and work together, then the system might be so incredibly inefficient that it ends up being slower than a different system with a fraction of the cost. It's kind of like 20 musicians each being able to play an instrument, but forming an orchestra would be difficult and any music they try to perform would just sound like meaningless noise. The architecture/organisation of distributed systems (or even just a single multi-processor system) and the resulting concurrent-computation concerns are an interesting topic, but it's honestly just a massive headache that most people shouldn't want to think about.
23:53 So I actually work for an animation studio. We do use gaming components for individual machines simply because it's cost-effective. When it comes to studio-wide rendering though, Epycs are the way to go
@Stefan Werner But the files only load once, then the job runs for hours on end. So yes, you might have a few minutes of congestion up front, but in the end the network is a small part, and ten 1-core machines are faster than one 10-core machine because the 10-core machine is really only 5-6 times faster than the 1-core machine.
@Little Shop of Random Different from what? In house rendering was about 800 computers trying to load the same files simultaneously, with similar problems.
@Stefan Werner The 64-core machine will be slower in most cases. These things do not scale linearly, and with most renders being several minutes to several hours per frame or pass, network congestion does not really come into play. You still need a good network config, but the machines are not fighting. Data gets loaded into RAM once and then the job is mostly idle on the network (there are some exceptions of course).
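A toy Amdahl's-law calculation shows the kind of gap I mean (the 10% serial/overhead fraction is just an illustrative assumption, not a measured number):

```python
# Speedup of a job on N cores when a fixed fraction of it can't be parallelized.
def amdahl(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

print(amdahl(10, 0.10))   # ~5.3x, roughly the 5-6x mentioned above
print(amdahl(64, 0.10))   # ~8.8x, which is why many smaller boxes can win
```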
A "completely exposed" 12v supply capable of high current. The last thing I was expecting was that his point was that was somehow dangerous. Does linus not drive a car? Has Linus never jump started a car?
Wish I understood more! I'm 60 and remember my first computer experience, ticker tape, sheets of paper, a large room to house the "computer" with humans loading programs for ten minutes!! And now, this small box ...with water cooling. In only 50 years!!!
I love how Linus has to just jump in every time without reading any instructions, and disregarding recommendations from manufacturers.... then breaks something
9:17 In the automotive world, we actually "stack" "heat exchangers" all the time. Radiators, AC condensers, oil coolers, and even front-mount intercoolers all get stacked quite often. Works just fine.
@-Puff Could be someone's coding practice or something, this application isn't really the issue, and some of the bots like the 'fuk wat u sayin' ones are pure gold
@Paul Hughes Well unfortunately, all those are are bots. They sell the accounts on forums and make a profit. Sad that Linus has helped them. Over 900 more subs since that heart.
@Coaxill It can do, depends on the specific arrangements and the cooling demands. Normal practice is to put the coolers that need to provide the lowest temperatures at the front. With multi-pass coolers, the hottest would be placed at the rear with the coldest at the front - ie, inlet at the back from the heat source, outlet at the front to the heat source.
Coming from the automotive world, seeing aluminum radiators and formed rubber hoses is completely normal haha. Aluminum conducts heat very well and dissipates it very fast too, basically all high performance automotive radiators are aluminum. They have to keep an internal combustion engine cool at temps above the boiling point of water, meaning the hoses have to hold at least 1 bar of pressure as well. So basically all cooling hoses or heater hoses are reinforced with either steel, nylon or kevlar mesh. That hose at @11:30 looks like any old automotive heater hose you'd see coming off a thermostat housing on most Japanese cars from the 80s on up, kinda neat to see!
27:06 The first time I did that on a server I was both scared and amazed... feels weird, but at the same time it's really pleasant to know and realise that if you lose one PSU your entire server won't just shut off and risk data corruption... that's great
When you have ample volume to sink heat into, you can use aluminum; copper is barely any better at heat conductivity, but it costs more and is more reactive, expensive, and heavier.
Copper/brass coolant systems are actually the middle budget option. Aluminum fins can be much thinner, which results in a much higher surface area for the radiators. You can also use a much thinner layer of metal on the bottom of the water block. Copper is used because it allows for cruder manufacturing. What you often see in consumer PC parts is thick machined aluminum, which is worse than copper.
That was your best video so far! Mind you, that's coming from a 20+ year SFX artist. :) Amazing render power! Puts my dual Xeon to shame. :]
Looking through the backlog here. I remember in 2008 when I did 3D stuff at a school; my last project was a scene render at poster size. I used raytracing for it, and I remember having to split up the render across 4 of our computers because otherwise I wouldn't be done in time. The render was 23 hours or so on each of those 4 computers... And now raytracing is done in frames per second... jesus :O
Now that'd be interesting. I do know they have someone in The Lab who's working on a really intense GPU test, but that's more for testing individual GPUs for gaming.
I dream of turning one of these enterprise-type high-processing servers into a normal home PC. Is it possible? Yes, with a lot of bodging. Is it a good idea? Guess I'll figure that out when I'm in crippling debt from trying it
"seconds per frame, and not that long before that, minutes per frame" Linus, remember that in our lifetime, the first computer animated movie (Toy Story) was made. Rendering that movie took an average of *seven hours per frame*, with a range between 45 minutes and 30 hours depending on the complexity. That server alone is 2000-3000x faster than the 117 computers that Toy Story was rendered on *combined*. It's an *insane* machine. 30 years of innovation go brr.
A bit of context for non-animation friends here, but shaving off 3 minutes from a 3:40 render for a single frame is huge since we usually render hundreds to thousands of frames for a single shot/batch render, so that's 4/5 of your render time removed. If your 12900k takes 5 hours to render a job, that's reduced to 1 hour.
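Worked out explicitly, with the same numbers as above (the 5-hour job is just an illustrative figure):

```python
# Shaving 3:40 down to ~0:40 per frame, applied to a whole batch render.
old_s, new_s = 3 * 60 + 40, 40          # seconds per frame, before and after
reduction = 1 - new_s / old_s           # ~0.82, i.e. roughly 4/5 removed
job_hours = 5                           # assumed length of the original job
print(f"{reduction:.0%} less; a {job_hours} h job drops to ~{job_hours * new_s / old_s * 60:.0f} min")
```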
I absolutely love watching you guys discover enterprise-grade hardware/ways of doing things in recent videos. Something to consider (talking about that fan controller not being able to interact with the system directly) - management connections/items are usually handled in a completely separate network section (considered "Out Of Band", whether it be separate switches or just separate VLANs). Definitely helps separate data traffic from management traffic and potential vectors. Please keep up the amazing content!
Quite! The guys and gals using it don't care about the actual hardware, just that it does the job they're using it for, whereas the admins actually looking after the hardware don't care what it's used for.
I want to see a server or regular PC submerged in 3M Novec. They are always on display at tech conventions... but it should be time for a consumer release, in the 2020s or 2030s.
This kind of tubing is actually pre-formed. It also comes from the automotive industry, but is used in other industries as well, like the one I'm working in (heat pumps).
I think the radiator assembly works because of an optimized dT of air/water. The hottest water can circulate in the hottest air and still get a decent dT and efficiency. Smurt boys, I say :D The most interesting part is how the F they balanced those parallel tubes to allow equal flow to each consumer.
This video is so amazing. It's a peek into the kind of computer/computational hardware that business operations spend around 50-100k USD on. I can ballpark a mechanical manufacturing machine, but a tech beast like this is another beast I never knew!
I bought the D5 pump (Laing D5 vario) currently in my PC from DangerDen in 2004. It's turning 18 years old this October. The potentiometer failed about 8 years ago, so the pump was stuck at the lowest RPM setting, so I just soldered a jumper wire and bypassed the pot so it always runs at max RPM, and it just keeps on truckin'. The key to keeping a D5 around indefinitely is running ethylene glycol based automotive coolant. I use Valvoline Zerex at a 10/90 ratio to reverse osmosis filtered water. Also, never run any of the PC specific premix sold by Koolance, EK etc. if you want your D5 to last. They're primarily based on propylene glycol and lack the lubricating additives that automotive coolants have, and most of them start breaking down after 6-12 months and gunking up the pump if you're not religious about changing it out. In comparison, I went 8 years running the same Valvoline Zerex mixture, and my loop still looked brand new inside when I overhauled it.
It would be cool to see some 10- or 20-year-old beasts, their price back then and adjusted for inflation to today, and what they compare to today from a performance standpoint... I'd like to see that.
Nice 👍, I didn't even watch the video yet and I'm not even going to. It's just good to see you've reached the top 1%. Finally made it to where your audience isn't the normal folks, but the people who can actually afford the awesome things you show.
If you guys rendered an image sequence instead of a single frame, you could see the actual render time without asset loading time. Usually asset loading is a one-time occurrence that happens at the beginning of the render sequence and is almost all based on drive speed and IO speed. Once loaded, it's kept in RAM, which is way more efficient, and you will see more consistent render times.
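For example, something like this to render the range headless so assets only load once per job (the file path and frame numbers are placeholders, not from the video):

```python
# Kick off a background animation render with Blender's command-line interface.
import subprocess

subprocess.run([
    "blender", "-b", "scene.blend",   # -b: run without the UI
    "-s", "1", "-e", "240",           # start/end frames of the sequence
    "-a",                             # render the whole animation
], check=True)
```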
23:52 The reason gaming hardware mostly doesn't cut it is VRAM. These pro GPUs can hit hundreds of gigabytes NVLinked, while gaming hardware will max out at 48GB with 3090s. VRAM is critical for GPU rendering, as production scenes can hit 150GB+ in VRAM.
@Mitchell Anderson They did not specify, but I know they got sponsored by either Google or Nvidia, who maybe provided access to a render farm, etc. But I don't think they used RTX Quadros.
If you really want to push its limits you should do an animation render in Blender. I have yet to find a proper system for the renders I have tried doing; I have crashed and burnt quite a lot of systems trying to figure out what will work best with Blender animations.
That is so funny to me, being a car guy. @linus the tube that is bent at a tight 90 and still holds its shape is automotive grade. On automotive rads, dense rubber is combined with a Kevlar helical thread layout so that you can have incredibly tight and complex bends with no distortion. Glad that I can help enlighten!!
A few things to mention that might make the benchmarks look less impressive: there's a lot of data that is reused for follow-up frames when rendering an actual animation. That stuff is cached, so frame by frame it would be faster. It's all (or at least mostly) CPU-bound and requires pulling a lot of data, so the bottleneck is not the GPUs. That is likely how the "frames per second" were achieved.
the whole "frames per second" thing for animation studios is mind boggling. for reference, it took on average 29 hours to render each frame of monsters inc. 29 hours. to get 1/30th of a second of footage.
@DistroHopper39B At a Red Hat conference years ago, DreamWorks did a presentation about their render farm. The presentation was more geared toward their massive storage solution (exabyte-scale NFS), but it was mentioned they used around 10,000 CPU cores at the time. Per project. For on average 6 months each. With 12 in the pipeline at a time.
I got involved with computer editing and animation over 3 decades ago using a Commodore Amiga 2000 with a Video Toaster. A very simple Lightwave render of an array of colored balls in 720x480 (standard definition) would usually run 8 hours+ at high quality settings. When the A4000 was released it cut that render time in half.