While I’ve been working on this series on IPv6, my thinking has evolved. It’s clear from reading RFC 7421 that the fixed 64-bit length of the interface ID is now so baked into IPv6 that there’s no point in debating it. If the sixteen bits (or rather four nibbles) of subnet ID that a standard /48 allocation leaves above the /64 boundary aren’t enough to encompass a site’s topology, then I accept the assurances (for example in Tom Coffeen’s IPv6 Address Planning) that a global prefix shorter than the standard /48 can be obtained (I still think that the size of the IPv6 address space is over-hyped).
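To make the subnetting arithmetic concrete, here’s a quick sketch using Python’s ipaddress module, with the 2001:db8::/48 documentation prefix standing in for a real allocation: the sixteen bits between /48 and /64 give 65,536 possible subnets.

```python
import ipaddress
from itertools import islice

site = ipaddress.ip_network("2001:db8::/48")   # documentation prefix as a stand-in

# 64 - 48 = 16 bits of subnet ID before the fixed /64 interface-ID boundary
subnet_bits = 64 - site.prefixlen
print(f"{2 ** subnet_bits} /64 subnets available")   # 65536

# The first few /64 subnets a planner would carve out of the /48
for subnet in islice(site.subnets(new_prefix=64), 4):
    print(subnet)   # 2001:db8::/64, 2001:db8:0:1::/64, ...
```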
IPv6 has two ways of automating addressing and configuration: DHCPv6 and SLAAC. In practice neither is a complete solution on its own, and they have to be combined in various ways; the standards are still evolving. Something as straightforward as mapping an IP address to a MAC address becomes much harder with IPv6. Such a complex and messy environment is not conducive to good security.
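As an example of one of the moving parts, here’s a sketch of the classic way SLAAC can derive an interface ID from a MAC address (modified EUI-64, per RFC 4291); temporary and otherwise randomised addresses mean this mapping often isn’t there to be relied on, which is part of the messiness. The MAC address is just an illustrative value.

```python
def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64 interface ID from a 48-bit MAC:
    insert ff:fe in the middle and flip the universal/local bit."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                              # flip the U/L bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# e.g. with prefix 2001:db8::/64 this host would get 2001:db8::21a:2bff:fe3c:4d5e
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))     # 21a:2bff:fe3c:4d5e
```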
However, the biggest problem I have with IPv6 is the assumption that it means a return to end-to-end addressing. The original purpose of Network Address Port Translation (NAPT) was to stretch the IPv4 address space, and the massively increased address space of IPv6 makes that no longer necessary. The crude one-way filter effect of NAPT needs to be replaced with a proper firewall. Nevertheless, NAPT has had other benefits too (the hiding effect is sketched in code after this list), such as:
Host identity hiding
Network identity hiding
Topology hiding
Simplification of provider change when using provider-assigned addresses
Simplification of multi-homing when using provider-assigned addresses
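The hiding effect comes from the outbound rewrite itself. Here’s a deliberately minimal, hypothetical sketch (made-up addresses, no timers, checksums or port-exhaustion handling): whatever leaves the site carries only the single public address, so internal hosts, networks and topology never appear outside.

```python
# Hypothetical outbound NAPT rewrite: every flow leaving the site is mapped
# to the one public address, so the internal address never appears outside.
PUBLIC_ADDR = "203.0.113.10"            # provider-assigned address (documentation range)
nat_table = {}                          # (inside_addr, inside_port) -> outside_port
next_port = 40000

def translate_outbound(inside_addr: str, inside_port: int) -> tuple[str, int]:
    global next_port
    key = (inside_addr, inside_port)
    if key not in nat_table:
        nat_table[key] = next_port      # allocate a fresh outside port for the flow
        next_port += 1
    return PUBLIC_ADDR, nat_table[key]

# Two hosts on different internal subnets both appear as 203.0.113.10 externally.
print(translate_outbound("192.168.1.20", 51515))    # ('203.0.113.10', 40000)
print(translate_outbound("192.168.7.99", 51515))    # ('203.0.113.10', 40001)
```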
Various attempts have been made to replicate these benefits in IPv6: temporary addresses, “local network protection” (using host routes or Mobile IP), ULAs, NPTv6 and multi-homing via DHCPv6. In chasing the end-to-end addressing dream, all of these have added complexity to the endpoint device (difficult to manage on an enterprise scale); all, that is, apart from NPTv6, which is a form of NAT.
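Of these, NPTv6 is worth a concrete illustration: it is a stateless, one-to-one rewrite of the site prefix (a ULA inside, the provider-assigned prefix outside), leaving the subnet and interface-ID bits untouched. The prefixes below are made up, and the checksum-neutral adjustment that RFC 6296 specifies is omitted.

```python
import ipaddress

INTERNAL = ipaddress.ip_network("fd12:3456:789a::/48")   # illustrative ULA used inside
EXTERNAL = ipaddress.ip_network("2001:db8:aaaa::/48")    # illustrative provider prefix

def translate_prefix(addr: str) -> str:
    """Swap the internal /48 for the external /48, keeping the lower 80 bits."""
    a = ipaddress.ip_address(addr)
    host_bits = int(a) & ((1 << (128 - INTERNAL.prefixlen)) - 1)
    return str(ipaddress.ip_address(int(EXTERNAL.network_address) | host_bits))

print(translate_prefix("fd12:3456:789a:7::42"))   # 2001:db8:aaaa:7::42
```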
IPv6 enthusiasts often express a desire to return to the time in the early days of the Internet when end-to-end addressing was feasible. I think that the world has moved on since then: the Internet is no longer as simple or as safe, and we need to adapt our architectures accordingly. We should not assume that end-to-end addressing is always the end goal. When applying the end-to-end principle to the enterprise, it makes more sense to think of the endpoint as lying at the enterprise boundary, where complexity can be more easily handled.
Now the end-to-end principle is fundamental to the architecture of the Internet. It owes a lot to the work of Louis Pouzin on the CYCLADES network: his insight was that functions like reliability of transmission and virtual circuits were best handled at the endpoints of a connection, leaving the network to simply shift packets around, without worrying about reliability or even the order in which the packets arrived at their destination. You might summarise this as “smart hosts, dumb networks”. To illustrate this, let’s imagine a set of ping-pong balls that spell out the word “hello”. If I drop them through a wooden box that splits them up and sends them through different paths before they fall out of the bottom (a bit like a bean machine, except that it outputs a single sequence of balls), then I might end up with a set that says “loleh”. I would then put them back in the correct order (perhaps by using a sequence number on the back of each ball), and if the box lost a ball I would arrange for it to be retransmitted. That’s basically how TCP/IP creates virtual connections over an unreliable datagram network.
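To turn the analogy into something runnable, here’s a toy sketch: each ball carries a sequence number, the receiver reorders whatever arrives and asks for anything missing to be sent again, which is roughly what TCP does with sequence numbers and acknowledgements (minus windows, timers and everything else that makes real TCP work).

```python
# Toy version of the ping-pong-ball analogy.
sent = list(enumerate("hello"))                       # [(0, 'h'), (1, 'e'), ...]
arrived = [(2, "l"), (4, "o"), (1, "e"), (0, "h")]    # scrambled in transit, (3, 'l') lost

received = dict(arrived)
missing = [seq for seq, _ in sent if seq not in received]
print("please retransmit:", missing)                  # [3]

# The "sender" retransmits the lost balls and the receiver reassembles by sequence number.
received.update((seq, ball) for seq, ball in sent if seq in missing)
print("".join(ball for _, ball in sorted(received.items())))   # hello
```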
The end-to-end principle has been very important in allowing the Internet to scale up to the size it is today: routers are (relatively) simple devices that can be added into the network to form a mesh, through which individual datagrams (the segments of a conversation) can take various paths to their destination. Crucially it has also made the Internet much more robust: if there is a problem at any point in the path between two hosts, then datagrams can route themselves around the obstacle and find an alternative path.
This principle has important consequences if we require applications to survive partial network failures. An end-to-end protocol design should not rely on the maintenance of state (i.e. information about the state of the end-to-end communication) inside the network. Such state should be maintained only in the endpoints, in such a way that the state can only be destroyed when the endpoint itself breaks (known as fate-sharing). An immediate consequence of this is that datagrams are better than classical virtual circuits. The network’s job is to transmit datagrams as efficiently and flexibly as possible. Everything else should be done at the fringes.
If a particular node or set of nodes on a link has to maintain the state of the connection, then datagrams can’t route themselves around trouble. However, this argument only really applies in the Default-Free Zone (DFZ), the meshed heart of the Internet where there is no default route and datagrams have a multiplicity of routes to their destination. Stub networks in general (and enterprise networks in particular) are more like the branch of a tree than a mesh: there is generally only one well-defined path out to the Internet. That path typically goes through a NAPT device; if the NAPT device fails, then packets have no alternative path to take anyway.
Now it’s true that if the link to the Internet went through a stateless device, then it could fail and recover and the endpoints could continue their conversation where they had been interrupted, assuming that the application hadn’t timed out in the meantime. When a NAPT device fails and then recovers, its NAPT state has been lost and all the connections going through it are broken (assuming the device is not clustered in a way that maintains NAPT state during a failover). However, this argument holds true of any network device that holds connection state (stateful-inspection firewalls, for example), although the anti-NATters rarely make this explicit.
In fact (at the enterprise level at least) NAPT devices are nearly always stateful-inspection firewalls as well. The anti-NAPT argument often refers to the performance overhead of maintaining NAPT state, but stateful-inspection firewalls have to maintain the state of permitted connections anyway, and it would surprise me if the internal architecture didn’t combine the two functions. The whole point of stateful-inspection firewalls is that they improve performance, by avoiding the need to test every datagram against the firewall’s ruleset.
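As a purely conceptual sketch (not any real firewall’s internals), this is why the two functions sit naturally in one table: the first packet of a flow pays for the walk through the ruleset and, if permitted, creates a state entry that can also carry its NAPT translation; every subsequent packet in the flow is handled by a single lookup. All addresses and ports below are illustrative.

```python
state = {}   # 5-tuple -> (translated address, translated port)

def ruleset_permits(flow):
    # Stand-in for the comparatively expensive walk through the firewall rules.
    src, sport, dst, dport, proto = flow
    return proto == "tcp" and dport in (80, 443)

def handle_packet(flow, public_addr="203.0.113.10", outside_port=40001):
    # flow = (src_addr, src_port, dst_addr, dst_port, proto); fixed example port for brevity
    if flow in state:
        return state[flow]                      # fast path: state-table hit, no ruleset walk
    if not ruleset_permits(flow):
        return None                             # dropped
    state[flow] = (public_addr, outside_port)   # permitted and translated in one entry
    return state[flow]

flow = ("192.168.1.20", 51515, "198.51.100.5", 443, "tcp")
print(handle_packet(flow))   # first packet: ruleset check, state entry created
print(handle_packet(flow))   # later packets: lookup only
```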
In reality there are many stateful network devices at the modern enterprise perimeter: not just firewalls/NAPT devices, but intrusion prevention systems, web proxies, load balancers and other reverse proxies. They all violate the end-to-end principle, and they all have to devote resources to maintaining state, but the security and performance benefits that they provide outweigh the loss of resilience. It’s a pragmatic compromise: architectural principles are fine as long as you don’t lose sight of the bigger picture.
In the next post I’ll look at the impact of NAT on applications.