Load Balanced Virtual Switches (Erl, Naserpour)
How can workloads be dynamically balanced on physical network connections to prevent bandwidth bottlenecks?
When network traffic on the uplink port of a virtual switch increases, delays, performance degradation, and packet loss can result, because the affected virtual servers send and receive all of their traffic through a single uplink.
Network traffic is balanced across multiple uplinks between the virtual and physical networks.
Additional network interface cards are added to the physical host, and the virtual switch is configured to use them as multiple physical uplinks.
The addition of network interface cards and physical uplinks allows network workloads to be balanced.
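To keep traffic spread evenly without reordering the packets of any one conversation, virtual switches typically map each virtual server to an uplink deterministically, for example by hashing the frame's source MAC address. The sketch below is a minimal, hypothetical illustration of that teaming policy; the uplink names (`vmnic0`, etc.) and the helper function are assumptions for the example, since real virtual switches perform this selection in the hypervisor's data path rather than in application code.

```python
import hashlib

def select_uplink(src_mac: str, uplinks: list[str]) -> str:
    """Pick a physical uplink for a frame by hashing its source MAC.

    Hypothetical helper illustrating a source-MAC-hash teaming policy:
    every frame from the same virtual NIC lands on the same uplink,
    while different virtual NICs are distributed across all uplinks.
    """
    digest = hashlib.sha256(src_mac.encode("ascii")).digest()
    return uplinks[digest[0] % len(uplinks)]

# Assumed uplink (physical NIC) names on the host.
uplinks = ["vmnic0", "vmnic1", "vmnic2"]

for mac in ("00:50:56:aa:00:01", "00:50:56:aa:00:02", "00:50:56:aa:00:03"):
    print(mac, "->", select_uplink(mac, uplinks))
```

Because the mapping is a pure function of the source MAC, no per-packet state is needed and a given virtual server's traffic never flaps between uplinks; the trade-off is that balance is statistical rather than exact, since a busy virtual server cannot be split across links.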
NIST Reference Architecture Mapping
This pattern relates to the highlighted parts of the NIST reference architecture, as follows: