The networking system has a max tick rate of 120 Hz by default. Inside UNetConnection::Tick it computes an estimated bandwidth using that net max tick rate, the engine's max frame rate, and the actual delta time. The engine's max frame rate can be a value like 120, or it can be 0 if the game's frame rate is uncapped and none of the default caps (such as the background or laptop-battery frame rate caps) are in effect.
This behavior changed in 4.25 due to CL 9690077 (originally 9688733) from [Link Removed]. That change fixed the case where the engine max tick rate is lower than the net max tick rate, but appears to have broken the case where the engine max tick rate cap is 0. If EngineTickRate is 0, it gets set to MAX_flt, which then sets DesiredTickRate to the MaxNetTickRateFloat of 120.0f. Because DesiredTickRate is now never 0, BandwidthDeltaTime is always lowered when computing how many bits were sent; in my local test it drops from 0.033 (30 fps) to 0.00833 (120 fps). This reduces the available bandwidth to one quarter of what it should be, causing network saturation and very poor networking performance on listen servers.
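To make the arithmetic concrete, here is a standalone sketch of the post-4.25 clamping logic described above. The function name and structure are hypothetical (this is not the engine source), but the variable names mirror the ones involved: an uncapped engine (tick rate 0) is promoted to MAX_flt, so the min() always lands on the 120 Hz net cap.

```cpp
#include <algorithm>
#include <cassert>
#include <cfloat>
#include <cmath>

// Hypothetical standalone sketch of the 4.25-era behavior described above;
// not the actual engine code, just the clamping math it appears to perform.
float ComputeBandwidthDeltaTime(float DeltaTime, float EngineTickRate,
                                float MaxNetTickRateFloat /* 120.0f by default */)
{
    // Post-CL 9690077: an uncapped engine tick rate (0) is treated as
    // "infinite" rather than "no cap"...
    if (EngineTickRate <= 0.0f)
    {
        EngineTickRate = FLT_MAX; // MAX_flt in engine terms
    }
    // ...so DesiredTickRate always falls back to the 120 Hz net cap.
    const float DesiredTickRate = std::min(EngineTickRate, MaxNetTickRateFloat);
    // The delta time used for the bandwidth estimate is clamped to the
    // desired frame time, shrinking the window used to count sent bits.
    return std::min(DeltaTime, 1.0f / DesiredTickRate);
}
```

Running this with a real frame time of 1/30 s and an uncapped engine yields 1/120 s, i.e. the bandwidth window is a quarter of the actual frame time, matching the saturation seen above.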
It's not entirely clear what the correct behavior should be, but this effectively caused a regression in 4.25 for default networking settings. The old behavior capped BandwidthDeltaTime based on the engine MaxTickRate only if it was set, which also did not handle consistent but low frame rates well.
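For contrast, a sketch of the pre-4.25 behavior as described above (again a hypothetical reconstruction, not engine source): the clamp only applied when the engine tick rate cap was actually set, so an uncapped game kept its real frame time.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical sketch of the pre-4.25 behavior: clamp only when the
// engine tick rate cap is set; otherwise use the real frame time.
float ComputeBandwidthDeltaTime_Old(float DeltaTime, float EngineTickRate)
{
    if (EngineTickRate > 0.0f)
    {
        return std::min(DeltaTime, 1.0f / EngineTickRate);
    }
    return DeltaTime; // uncapped: full frame time, full bandwidth window
}
```

At an uncapped, consistent 30 fps this returns the full 1/30 s window, but note the drawback mentioned above: with a low cap set, it also shrinks the window the same way.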
There's no existing public thread on this issue, so head over to Questions & Answers and just mention UE-100223 in the post.