Probably a very basic question, but it confused the hell out of me. Say I have 100 Mb internet at home. Scenario one: a router with 100 Mb port speed, and I connect two PCs to it, each with a 100 Mb NIC. Is it true that, ignoring other factors, I should be able to get close to (if not exactly) a 100 Mb connection on each of the PCs? Scenario two: if I instead have an (unmanaged) switch and connect the PCs to it, would I only end up getting 50 Mb on each PC (i.e., the switch essentially "halves" my internet speed if I connect 2 PCs to it, gives 1/3 each if I connect 3 PCs, etc.)?
The first part is correct. The main reason it isn't "much much" higher is that the extra capacity would be wasted performance, but you could have a 24-port switch with the CPU of a 48-port switch, and then you'd have over 50 Gbit of internal switching bandwidth for the 24 ports.
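To make that arithmetic concrete, here's a minimal sketch. The assumptions are mine: gigabit ports, and full duplex counted as both directions, which is how fabric capacity is usually quoted.

```python
# Rough switching-fabric arithmetic (illustrative assumptions: gigabit
# ports, full duplex counted twice since every port can send AND
# receive at line rate at the same time).

def fabric_capacity_gbit(ports: int, port_speed_gbit: float = 1.0) -> float:
    """Fabric needed so all ports run at line rate in both directions."""
    return ports * port_speed_gbit * 2

print(fabric_capacity_gbit(24))  # 48.0 -> all a 24-port gigabit switch can use
print(fabric_capacity_gbit(48))  # 96.0 -> the 48-port-class fabric it borrowed
```

So a 24-port switch built on a 48-port-class chip has roughly double the fabric its own ports could ever saturate, which is why vendors don't normally do it.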
The second part is a bit strange to me, probably because of the wording. When you say modem, it is probably already a router, because you have multiple LAN ports. At the consumer level, a modem normally only has 1 WAN and 1 LAN port.
You can have routers behind routers, but unless the main router and the two routers' WAN sides are manually configured correctly, the two LAN networks behind each router can't reach each other, much like you can't easily reach your neighbor's PC unless he specifically opens a connection first.
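To illustrate why that is, here's a toy sketch of the connection table a NAT router keeps. All addresses and port numbers are made up for illustration, and real NAT tracks far more state than this:

```python
# Toy model of a NAT router's connection table. Addresses/ports are
# invented for illustration; real NAT tracks much more per-flow state.

nat_table = {}        # (wan_ip, wan_port) -> (lan_ip, lan_port)
next_wan_port = 40000

def outbound(lan_ip, lan_port, router_wan_ip="203.0.113.7"):
    """A LAN host opens a connection: the router picks a public port
    and remembers the mapping so replies can be translated back."""
    global next_wan_port
    key = (router_wan_ip, next_wan_port)
    nat_table[key] = (lan_ip, lan_port)
    next_wan_port += 1
    return key

def inbound(wan_ip, wan_port):
    """A packet arrives from outside: it is only forwarded if a LAN
    host created the mapping first (or you set up port forwarding)."""
    return nat_table.get((wan_ip, wan_port))  # None -> dropped

# A PC behind the router reaches out, so the reply gets through:
mapping = outbound("192.168.1.50", 51515)
print(inbound(*mapping))              # ('192.168.1.50', 51515)

# Someone tries to reach that PC cold: no mapping exists, so it's dropped.
print(inbound("203.0.113.7", 12345))  # None
```

With two NAT routers back to back, each LAN only ever sees the other router's WAN address, so neither side can initiate a connection inward without extra configuration (port forwarding or static routes on the main router).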
What's the model number of the thing you called a modem?
Can you expand on what you meant when you stated "which I assume would be minimum"?
And without expensive load balancers (and in some cases even with them) you can't say things like "use the bandwidth the IoT devices leave over". When a connection is at full capacity, it is very random who gets more or less of the bandwidth, because of the way TCP was designed in the beginning: resiliency was much more important than fairness ;)
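That randomness falls out of TCP's AIMD behavior (additive increase, multiplicative decrease): flows keep ramping up until the bottleneck overflows, and whichever flow happens to eat the loss backs off. Here's a toy simulation of two such flows; the numbers (100 Mbit link, step sizes, round count) are invented for illustration, and this is nowhere near how a real stack schedules packets:

```python
import random

# Toy AIMD model: two TCP-like flows sharing one bottleneck link.
capacity = 100.0      # pretend 100 Mbit bottleneck
rates = [10.0, 10.0]  # current send rates of flow A and flow B

for _ in range(1000):
    for i in range(2):
        rates[i] += 1.0            # additive increase every round
    if sum(rates) > capacity:      # link overflows -> a packet drops
        # Which flow sees the loss is essentially chance; faster
        # senders are just somewhat more likely to be hit.
        loser = random.choices([0, 1], weights=rates)[0]
        rates[loser] /= 2          # multiplicative decrease for that flow

print(f"flow A: {rates[0]:.0f} Mbit, flow B: {rates[1]:.0f} Mbit")
```

Run it a few times and the split jumps around between runs; the flows fight their way toward a rough long-term average rather than settling at a clean 50/50.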