
Running Multiple Docker Containers with High-Performance Networking over Jumbo Frames

Operating two instances of proprietary software in separate Docker containers becomes a challenge when you need high-performance networking over extended distances at gigabit speeds, particularly where latency sensitivity and network MTU limitations intersect. Here's how to tackle these constraints using jumbo frames alongside Docker networking, without changing the containerized application itself:

## Setting Up Your Host with Necessary Interfaces

Before diving into networking configuration, ensure your host has multiple Ethernet interfaces as follows:

- **Primary Interface (`bond0`)** - Located at `172.16.2.10`, with an IP alias per container instance:
    ```plaintext
     bond0:0  172.16.2.20
     bond0:1  172.16.2.21
    ```
- **Fiber Interfaces (`em3`, `em4`)** - Direct connections for latency-sensitive traffic at gigabit speeds, with a separate IP assignment per instance; e.g., the first fiber link is dedicated to the first container, while the second serves the other instance or acts as backup/additional load balancing:
    ```plaintext
     em3 172.16.3.10        # First container's fiber interface (latency-sensitive traffic)
     em4 172.16.4.10        # Second container's fiber interface, or load balancing/backup
    ```
Note: The `bond` interface here simplifies the setup but may not be optimal for every situation; it is chosen for its simplicity and the performance benefits a properly configured bond can offer on some infrastructures. The fiber interfaces themselves must also carry jumbo frames, as shown below.
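A minimal sketch of the host-side MTU setup, assuming the interface names above and that every hop on the fiber path supports a 9000-byte MTU:

```bash
# Raise the fiber interfaces to a 9000-byte MTU (requires root); every
# device on the path must support jumbo frames or packets will be dropped
ip link set dev em3 mtu 9000
ip link set dev em4 mtu 9000

# Verify the change took effect
ip link show em3 | grep -o 'mtu [0-9]*'
```

This change does not survive a reboot on its own; persist it through your distribution's usual network configuration (netplan, NetworkManager, or ifcfg files).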

## Addressing Host Network Limitations
Docker's bridge network driver defaults to an MTU of `1500`, which falls far short of the roughly `9000` our gigabit fiber connections need for jumbo frames, so we must venture beyond the conventional networking methods provided by default.
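You can confirm what your host is actually using by inspecting the default bridge; a quick check, assuming the standard `docker0` bridge name:

```bash
# The default Docker bridge typically reports mtu 1500
ip link show docker0

# Docker's view of the default bridge network and any MTU option set on it
docker network inspect bridge --format '{{json .Options}}'
```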

### Overcoming Networking Constraints with Host Mode and iptables
Host network mode (`--net=host`) can be leveraged, albeit without Docker's built-in container communication features such as port mappings. To work around this:
1. **Enable host networking for containers** with `--network host` so each container uses the existing host interfaces directly instead of the virtual bridge Docker creates by default. Each application instance then communicates through the standard host network stack, but without the port mapping or isolation Docker provides natively (see the two-instance sketch after this list).
    ```bash
     # Example command (adjust as necessary for your container setup);
     # note that -p port mappings are ignored under host networking
     docker run --net=host [container options] myimage
    ```
2. **Utilize iptables** to redirect and masquerade incoming traffic on specific interfaces and ports, steering each connection to the right instance based on its source/destination IP address or port, e.g. toward a web GUI exposed on port `80`. This can be set up through custom rules (a verification sketch follows the list):
    ```bash
     # Sample rules (adjust to your exact scenario): forward traffic arriving
     # on the bond interface for the service port to the first instance's
     # fiber-side address, keeping the full 9000-byte MTU path intact
     iptables -t nat -A PREROUTING -i bond0 -p tcp --dport <service_port> \
              -j DNAT --to-destination 172.16.3.10:80
     # Masquerade replies leaving via the fiber interface
     iptables -t nat -A POSTROUTING -o em3 -j MASQUERADE
    ```
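Because both containers share the host's network stack under `--net=host`, the two instances must not bind the same address and port. A minimal sketch, assuming the application can be told its listen address (the `LISTEN_ADDR` variable here is hypothetical; substitute whatever your software actually accepts):

```bash
# Hypothetical: pin each instance to its own fiber-side IP so the two
# containers sharing the host network stack do not collide on port 80
docker run -d --net=host --name instance1 -e LISTEN_ADDR=172.16.3.10:80 myimage
docker run -d --net=host --name instance2 -e LISTEN_ADDR=172.16.4.10:80 myimage
```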
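Once the rules are in place, confirm they actually match traffic; packet and byte counters make stale or mistyped rules easy to spot:

```bash
# List NAT rules with packet/byte counters; a rule stuck at zero packets
# usually means the match (interface, port, protocol) is wrong
iptables -t nat -L -n -v --line-numbers
```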
### Considerations When Using Host Mode with iptables: 
- Be mindful of firewall restrictions that may arise from using host mode. If the network is not wholly controlled by the server hosting Docker, ensure your ISP or NAT device permits these traffic flows, including the masqueraded ports your containers rely on; this matters especially when latency-sensitive data must traverse public Internet paths (if applicable).
    - Additionally, always back up your current iptables rules before making significant changes, and have a rollback plan in case a misconfiguration leads to unintended routing or loss of network access; see the sketch below.
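A minimal backup-and-restore routine using `iptables-save` and `iptables-restore` (both ship with the standard iptables package); the dated filenames are just examples:

```bash
# Snapshot the current ruleset before making changes
iptables-save > /root/iptables-backup-$(date +%Y%m%d).rules

# Roll back to a known-good snapshot if the new rules misbehave
iptables-restore < /root/iptables-backup-20240101.rules
```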
    
### Final Remarks: Simplification vs Performance Trade-Offs and Limitations  
While this method effectively sidesteps Docker's bridge MTU limitation, it is not without drawbacks, such as the loss of port isolation between containers, a significant factor for container security best practices. Moreover, the dependency on iptables means that any change to network stack policies will directly impact your setup; robust testing and understanding are critical before implementation in production environments:
- This approach is a compromise: it allows essential software operations over high-MTU fiber interfaces, but it may not suit containerized applications where isolation, port assignments based on networking abstractions (like Docker networks), or external network communications play vital roles. Always weigh the security and operational trade-offs of host mode with iptables against traditional bridge/overlay solutions, and consult documentation specific to your environment where applicable.

By following these steps, you can employ high-MTU networking over fiber interfaces within Docker containers while meeting the application's connectivity and performance expectations. However, stay mindful of the operational requirements of a non-isolated network stack, and consider alternatives or customized solutions if isolation is paramount to your specific use case.

