
Internet protocols are changing | APNIC Blog


When the Internet started to become widely used in the 1990s, most traffic used just a few protocols: IPv4 routed packets, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named hosts to connect to, and HTTP was often the application protocol using it all.

For many years, there were negligible changes to these core Internet protocols; HTTP added a few new headers and methods, TLS slowly went through minor revisions, TCP adapted congestion control, and DNS introduced features like DNSSEC. The protocols themselves looked about the same ‘on the wire’ for a very long time (excepting IPv6, which already gets its fair share of attention in the network operator community).

As a result, network operators, vendors, and policymakers that want to understand (and sometimes, control) the Internet have adopted a number of practices based upon these protocols’ wire ‘footprint’ — whether intended to debug issues, improve quality of service, or impose policy.

Now, significant changes to the core Internet protocols are underway. While they are intended to be compatible with the Internet at large (since they won’t get adoption otherwise), they might be disruptive to those who have taken liberties with undocumented aspects of protocols or made an assumption that things won’t change.

Why we need to change the Internet

There are a number of factors driving these changes.

First, the limits of the core Internet protocols have become apparent, especially regarding performance. Because of structural problems in the application and transport protocols, the network was not being used as efficiently as it could be, leading to poor end-user-perceived performance (in particular, latency).

This translates into a strong motivation to evolve or replace those protocols because there is a large body of experience showing the impact of even small performance gains.

Second, the ability to evolve Internet protocols — at any layer — has become more difficult over time, largely thanks to the unintended uses by networks discussed above. For example, HTTP proxies that tried to compress responses made it more difficult to deploy new compression techniques; TCP optimization in middleboxes made it more difficult to deploy improvements to TCP.

Finally, we are in the midst of a shift towards more use of encryption on the Internet, first spurred by Edward Snowden’s revelations in 2013. That’s really a separate discussion, but it is relevant here in that encryption is one of the best tools we have to ensure that protocols can evolve.

Let’s have a look at what’s happened, what’s coming next, how it might impact networks, and how networks impact protocol design.

HTTP/2

HTTP/2 (based on Google’s SPDY) was the first notable change — standardized in 2015, it multiplexes multiple requests onto one TCP connection, so that requests no longer need to queue on the client or block one another. It is now widely deployed, and supported by all major browsers and web servers.
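As an illustration (not from the article), here is a minimal sketch of what multiplexing buys a client in practice: several concurrent requests sharing one connection. It uses the third-party Python httpx library, and the URLs are placeholders.

import asyncio
import httpx  # third-party; HTTP/2 support needs: pip install "httpx[http2]"

async def main():
    # One client with http2=True: concurrent requests share a single multiplexed
    # TCP connection when the server negotiates HTTP/2 via ALPN.
    async with httpx.AsyncClient(http2=True) as client:
        urls = [f"https://example.com/item/{i}" for i in range(3)]
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code)   # expect "HTTP/2" if the server supports it

asyncio.run(main())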

From a network’s viewpoint, HTTP/2 made a few notable changes. First, it’s a binary protocol, so any device that assumes it’s HTTP/1.1 is going to break.
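To see why, here is a small sketch of decoding the 9-octet binary frame header that prefixes every HTTP/2 frame (layout as in RFC 7540); there is nothing text-based for an HTTP/1.1 parser to latch onto.

import struct

def parse_frame_header(data: bytes) -> dict:
    # HTTP/2 frame header: 24-bit length, 8-bit type, 8-bit flags,
    # then a reserved bit and a 31-bit stream identifier.
    length_hi, length_lo, frame_type, flags, stream_id = struct.unpack(">BHBBI", data[:9])
    return {
        "length": (length_hi << 16) | length_lo,
        "type": frame_type,
        "flags": flags,
        "stream_id": stream_id & 0x7FFFFFFF,   # mask off the reserved bit
    }

# e.g. a SETTINGS frame header: length 0, type 0x4, no flags, stream 0
print(parse_frame_header(bytes.fromhex("000000040000000000")))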

That breakage was one of the primary reasons for another big change in HTTP/2; it effectively requires encryption. This gives it a better chance of avoiding interference from intermediaries that assume it’s HTTP/1.1, or do more subtle things like strip headers or block new protocol extensions — both things that had been seen by some of the engineers working on the protocol, causing significant support problems for them.

HTTP/2 also requires TLS/1.2 to be used when it is encrypted, and blacklists cipher suites that were judged to be insecure — with the effect of only allowing ephemeral keys. See the TLS 1.3 section for potential impacts here.

Finally, HTTP/2 allows more than one host’s requests to be coalesced onto a connection, to improve performance by reducing the number of connections (and thereby, congestion control contexts) used for a page load.

For example, you could have a connection for www.example.com, but also use it for requests for images.example.com. Future protocol extensions might also allow additional hosts to be added to the connection, even if they weren’t listed in the original TLS certificate used for it. As a result, the assumption that the traffic on a connection is limited to the purpose for which it was initiated no longer holds.
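As a rough sketch of the kind of check involved (illustrative only; the helper names here are hypothetical, not part of any specification), a client might reuse an existing connection for a second host when that host resolves to the same address and is covered by the certificate already presented:

import socket

def cert_covers(host: str, san_names: list) -> bool:
    # Very simplified subjectAltName match, allowing one leading wildcard label.
    for name in san_names:
        if name == host:
            return True
        if name.startswith("*.") and host.split(".", 1)[-1] == name[2:]:
            return True
    return False

def can_coalesce(existing_host: str, new_host: str, san_names: list) -> bool:
    # Reuse the connection only if both hosts resolve to the same address
    # and the presented certificate also covers the new host.
    same_address = socket.gethostbyname(existing_host) == socket.gethostbyname(new_host)
    return same_address and cert_covers(new_host, san_names)

# e.g. can_coalesce("www.example.com", "images.example.com", ["*.example.com"])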

Despite these changes, it’s worth noting that HTTP/2 doesn’t appear to suffer from significant interoperability problems or interference from networks.

TLS 1.3

TLS 1.3 is just going through the final processes of standardization and is already supported by some implementations.

Don’t be fooled by its incremental name; this is effectively a new version of TLS, with a much-revamped handshake that allows application data to flow from the start (often called ‘0-RTT’). The new design relies upon ephemeral key exchange, thereby ruling out static keys.
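As a minimal sketch (placeholder host, nothing specific to any one deployment), this is how a Python client can insist on TLS 1.3 using the standard ssl module, which needs OpenSSL 1.1.1 or newer underneath:

import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse anything older than TLS 1.3

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())   # "TLSv1.3" when the server supports it
        print(tls.cipher())    # negotiated suite; TLS 1.3 suites all use ephemeral key exchange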

This has caused concern from some network operators and vendors — in particular those who need visibility into what’s happening inside those connections.

For example, consider the datacentre for a bank that has regulatory requirements for visibility. By sniffing traffic in the network and decrypting it with the static keys of their servers, they can log legitimate traffic and identify harmful traffic, whether it be attackers from the outside or employees trying to leak data from the inside.

TLS 1.3 doesn’t support that particular technique for intercepting traffic, since it’s also a form of attack that ephemeral keys protect against. However, since they have regulatory requirements to both use modern encryption protocols and to monitor their networks, this puts those network operators in an awkward spot.

There’s been much debate about whether regulations require static keys, whether alternative approaches could be just as effective, and whether weakening security for the entire Internet for the benefit of relatively few networks is the right solution. Indeed, it’s still possible to decrypt traffic in TLS 1.3, but you need access to the ephemeral keys to do so, and by design, they aren’t long-lived.
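For a concrete sense of what ‘access to the ephemeral keys’ means, here is a sketch using the key-log mechanism an endpoint can opt into; Python 3.8+ exposes it as SSLContext.keylog_filename, writing the NSS key-log format that tools such as Wireshark understand. The host and file path are placeholders.

import socket
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/tls-keys.log"   # per-session secrets are appended here

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        tls.recv(4096)

# A packet capture of this session can now be decrypted with the logged secrets,
# but only because this endpoint chose to export them; a purely passive observer gets nothing.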

At this point it doesn’t look like TLS 1.3 will change to accommodate these networks, but there are rumblings about creating another protocol that allows a third party to observe what’s going on — and perhaps more — for these use cases. Whether that gets traction remains to be seen.

QUIC

During work on HTTP/2, it became evident that TCP has similar inefficiencies. Because TCP is an in-order delivery protocol, the loss of one packet can prevent those in the buffers behind it from being delivered to the application (so-called head-of-line blocking). For a multiplexed protocol, this can make a big difference in performance.

QUIC is an attempt to address that by effectively rebuilding TCP semantics (along with some of HTTP/2’s stream model) on top of UDP. Like HTTP/2, it started as a Google effort and is now in the IETF, with an initial use case of HTTP-over-UDP and a goal of becoming a standard in late 2018. However, since Google has already deployed QUIC in the Chrome browser and on its sites, it already accounts for more than 7% of Internet traffic.

Read: Your questions answered about QUIC

Besides the shift from TCP to UDP for such a sizable amount of traffic (and all of the adjustments in networks that might imply), both Google QUIC (gQUIC) and IETF QUIC (iQUIC) require encryption to operate at all; there is no unencrypted QUIC.

iQUIC uses TLS 1.3 to establish keys for a session and then uses them to encrypt each packet. However, since it’s UDP-based, a lot of the session information and metadata that’s exposed in TCP gets encrypted in QUIC.
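As a loose illustration of that idea (this is the shape of per-packet protection, not the actual QUIC scheme; it uses the third-party cryptography package): a secret derived from the handshake protects each packet with an AEAD, the packet number feeds into the nonce, and the visible header is authenticated but not encrypted.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

session_key = AESGCM.generate_key(bit_length=128)   # stands in for a handshake-derived secret
static_iv = os.urandom(12)                          # per-connection IV

def protect(packet_number: int, header: bytes, payload: bytes) -> bytes:
    # Nonce = IV XOR packet number, so every packet gets a distinct nonce.
    nonce = bytes(a ^ b for a, b in zip(static_iv, packet_number.to_bytes(12, "big")))
    # The header travels in the clear but is authenticated as associated data.
    return AESGCM(session_key).encrypt(nonce, payload, header)

sealed = protect(7, header=b"\x40" + b"\x00" * 8, payload=b"ACK frames and stream data")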

In fact, iQUIC’s current ‘short header’ — used for all packets except the handshake — only exposes a packet number, an optional connection identifier, and a byte of state for things like the encryption key rotation schedule and the packet type (which might end up encrypted as well).

Everything else is encrypted — including ACKs, to raise the bar for traffic analysis attacks.

However, this means that passively estimating RTT and packet loss by observing connections is no longer possible; there isn’t enough information. This lack of observability has caused significant concern among some in the operator community, who say that passive measurements like this are critical for debugging and understanding their networks.

One proposal to meet this need is the ‘Spin Bit’ — a bit in the header that flips once per round trip, so that observers can estimate RTT. Since it’s decoupled from the application’s state, it doesn’t appear to leak any information about the endpoints, beyond a rough estimate of location on the network.
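A sketch of the estimate the Spin Bit enables (the observation format below is hypothetical; only the flip-per-round-trip idea comes from the proposal): time the intervals between flips of the bit as seen for one connection.

def rtt_samples(observations):
    # observations: iterable of (timestamp_seconds, spin_bit) for packets of one flow.
    samples, last_bit, last_flip = [], None, None
    for ts, bit in observations:
        if last_bit is not None and bit != last_bit:    # the bit just flipped
            if last_flip is not None:
                samples.append(ts - last_flip)          # flip-to-flip interval, roughly one RTT
            last_flip = ts
        last_bit = bit
    return samples

print(rtt_samples([(0.00, 0), (0.05, 1), (0.10, 0), (0.16, 1)]))   # roughly [0.05, 0.06]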

DOH

The newest change on the horizon is DOH — DNS over HTTP. A significant amount of research has shown that networks commonly use DNS as a means of imposing policy (whether on behalf of the network operator or a greater authority).

Circumventing this kind of control with encryption has been discussed for a while, but it has a disadvantage (at least from some standpoints) — it is possible to discriminate it from other traffic; for example, by using its port number to block access.

DOH addresses that by piggybacking DNS traffic onto an existing HTTP connection, thereby removing any discriminators. A network that wishes to block access to that DNS resolver can only do so by blocking access to the website as well.
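As a sketch of what that looks like from a client's side: the example below uses the JSON query form that some public resolvers expose (the dns.google endpoint is a deployment detail and an assumption here, not part of the DOH specification, which carries binary DNS messages in the HTTP body).

import json
import urllib.parse
import urllib.request

def doh_query(name: str, rtype: str = "A") -> list:
    # The DNS question rides on an ordinary HTTPS request; on the wire it looks
    # like any other traffic to the same server.
    qs = urllib.parse.urlencode({"name": name, "type": rtype})
    with urllib.request.urlopen(f"https://dns.google/resolve?{qs}") as resp:
        answers = json.load(resp).get("Answer", [])
    return [rr["data"] for rr in answers]

print(doh_query("www.example.com"))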

For example, if Google were to deploy its public DNS service over DOH on www.google.com and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).

DOH has just started its work, but there’s already a fair amount of interest in it, and some rumblings of deployment. How the networks (and governments) that use DNS to impose policy will react remains to be seen.

Read: IETF 100, Singapore: DNS over HTTP (DOH!)

Ossification and grease

To return to motivations, one theme throughout this work is how protocol designers are increasingly encountering problems where networks make assumptions about traffic.

For example, TLS 1.3 has had a number of last-minute issues with middleboxes that assume it’s an older version of the protocol. gQUIC blacklists several networks that throttle UDP traffic, because they think that it’s harmful or low-priority traffic.

When a protocol can’t evolve because deployments ‘freeze’ its extensibility points, we say it has ossified. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it’s blocking packets with TCP options that aren’t recognized, or ‘optimizing’ congestion control — that deploying even small changes to the protocol has become very difficult.

It’s necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a ‘tragedy of the commons’ where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.

There are many ways to prevent ossification; if the data in question is encrypted, it cannot be accessed by any party but those that hold the keys, preventing interference. If an extension point is unencrypted but commonly used in a way that would break applications visibly (for example, HTTP headers), it’s less likely to be interfered with.

Where protocol designers can’t use encryption and an extension point isn’t used often, artificially exercising the extension point can help; we call this greasing it.

For example, QUIC encourages endpoints to use a range of decoy values in its version negotiation, to avoid implementations assuming that it will never change (as was often encountered in TLS implementations, leading to significant problems).
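A sketch of what such a decoy might look like, following the IETF QUIC convention of reserving version numbers of the form 0x?a?a?a?a for exactly this purpose (the surrounding code is illustrative, not any implementation's actual logic):

import random

def grease_version() -> int:
    # Build a 32-bit value whose low nibble of every byte is 0xA, i.e. 0x?a?a?a?a.
    nibbles = [random.randint(0x0, 0xF) for _ in range(4)]
    return sum(((n << 4) | 0xA) << (8 * i) for i, n in enumerate(nibbles))

# Offer the real version alongside a decoy; a peer that chokes on the decoy
# has wrongly assumed the version list will never change.
versions_to_offer = [0x00000001, grease_version()]
print([f"{v:#010x}" for v in versions_to_offer])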

The network and the user

Beyond the desire to avoid ossification, these changes also reflect the evolving relationship between networks and their users. While for a long time people assumed that networks were always benevolent — or at least disinterested — parties, this is no longer the case, thanks not only to pervasive monitoring but also attacks like Firesheep.

As a result, there is growing tension between the needs of Internet users overall and those of the networks who want to have access to some amount of the data flowing over them. Particularly affected will be networks that want to impose policy upon those users; for example, enterprise networks.

In some cases, they might be able to meet their goals by installing software (or a CA certificate, or a browser extension) on their users’ machines. However, this isn’t as easy in cases where the network doesn’t own or have access to the computer; for example, BYOD has become common, and IoT devices seldom have the appropriate control interfaces.

As a result, a lot of discussion surrounding protocol development in the IETF is touching on the sometimes competing needs of enterprises and other ‘leaf’ networks and the good of the Internet overall.

Get involved

For the Internet to work well in the long run, it needs to provide value to end users, avoid ossification, and allow networks to operate. The changes taking place now need to meet all three goals, but we need more input from network operators.

If these changes affect your network — or won’t — please leave comments below, or better yet, get involved in the IETF by attending a meeting, joining a mailing list, or providing feedback on a draft.

Thanks to Martin Thomson and Brian Trammell for their review.

Mark Nottingham is a member of the Internet Architecture Board and co-chairs the IETF’s HTTP and QUIC Working Groups.


The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.

jimwise: Good summary of in-flight protocol changes. A lot on the horizon.

MotherHydra: A brave, new world.

The True Meaning of Christmas

They all made fun of Autometalogolex, but someday there will be a problem with Christmas that can only be solved if Santa somehow gets a serious headache, and then they'll see.
rclatterbuck: Now that is a neologism I can get behind.

The New Christmas



Merry Christmas and Happy Holidays, everybody! Thanks for an awesome year!

I’m taking my annual cartooning break, except this time I’ll be on a slightly longer break because I’m heading across the country to visit my family in Ontario. The chickens will be back on January 8!

In the meantime, you can follow my adventures on Instagram, read classic chickens on GoComics, and watch for my Best of 2017 post on Dec. 31st.


Reviving the 1973 Unix Programmer's Manual

The 1973 Fourth Edition of the Unix Programmer's Manual doesn't seem to be available online in typeset form. This is how I managed to recreate it from its source code.

Burn The Programmer!



This is a guest post by Virtual Reality developer Hugh Hancock, creator of VR horror RPG Left-Hand Path.

I've always had a problem with Arthur C. Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic."

This may have something to do with my career for a long time involving both magic and technology. Magic's a perennial fiction obsession of mine, and my media of choice have always been highly technological.

Most recently, I just released Left-Hand Path. It's a Virtual Reality game for the Oculus Rift and HTC Vive - obviously fairly technological - whose central conceit is that in it, you learn the skills to cast spells. And I don't just mean you select spells from a spellbook and then press a button: I mean you have to learn the gestures necessary to create the magic, and on occasion go through a complex system of ritual magic to create the effects you desire, flipping through your grimoire to remember exactly how you summon your ancient powers.


Now, all that makes for a great game. There's a sense of accomplishment as you learn to use the powers of magic to your advantage and remember how to cast the "Vis" spell as something nasty is closing on you. There's a sense of discovery as you learn more about the world, the way magic works, and find powerful new spells. And there's a sense of pant-crapping terror as you realise that the things your new ritual summons to eat your foes will cheerfully eat you as well.

(Fun fact: horror games are more intense in VR, by some margin. So terrifying, in fact, that I added a "Low Terror Mode" recently, after reading a significant number of people saying "I'd love to play your game, but I absolutely won't, because it sounds way too scary.")

Now, none of that description of magic sounds very much like the technology I use in 2017.

I don't have to imprecate dark and terrible forces in order to use my PS4, unless you count Sony's latest privacy policy. My lovely new iPad is famously intuitive, not a quality one would ascribe to The Lesser Key Of Solomon.

But.

And this is a big but. (I cannot lie.)

None of what I describe sounds like the consumer tech that I use. That's not so much the case for the other technology I interact with.

And I think that distinction - and the points where Clarke's Third Law does still apply - may explain a lot about why technologists are increasingly becoming hated in many circles.

Speak friend().init and enter

Magic is arcane - in the original meaning of the word. It's occult - again, in the original meaning of the word. It's difficult, dangerous, and often quite impractical despite its theoretically incredible power.

...Ever tried to set up a Sendmail server?

The technology that we deal with as technologists absolutely obeys Clarke's Third Law. Indeed, I've often wondered quite how much Charlie's Laundry Files magic was inspired by the fact he had a career before "Novelist" writing Perl. I've occasionally wondered if inscribing a pentagram and blood sacrifice would be more effective in ranking a site on Google than the traditional approaches. I've made myself physically ill whilst creating other worlds in the first generation of VR.

Sounds like magic to me. Indeed, I've read multiple books where the wizard protagonist suffers a severe "magic hangover" after overextending his powers, and it sounds a lot like what I experienced after finally getting Minecraft to work on my Oculus DK1.

(Side note: on quality VR platforms, those being Oculus and Vive, the vomiting thing is mostly solved by now. Don't fear the Great God Huey if you're thinking of trying those.)

I mean, does this look like some magical incantation stuff to you?

(?:(?:\r\n)?[ \t])(?:(?:(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t] )+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^\"\r\]|\.|(?:(?:\r\n)?[ \t]))"(?:(?: \r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\031]+(?:(?:( ?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^\"\r\]|\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t])))@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\0 31]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)*\ ](?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\031]+ (?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)](?: (?:\r\n)?[ \t])))|(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z |(?=[["()<>@,;:\".[]]))|"(?:[^\"\r\]|\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)

Ask the non-technologist in your life. Or just carve it on a stone tablet and leave it somewhere around Skara Brae for archaeologists to get excited about.

Klaatu barada MongoDB

So there's this class of people in the world who can do incredible things - like, say, teaching a car to drive itself. Or indeed crafting a literal Magician's Broom to clean their towers - I mean, apartments.

And they do this by immersing themselves in obscure, difficult learning that on the face of it makes no sense to the average person.

They don't need large teams of people or masses of wealth to do these things. In fact, if one of them locks themselves up in their tower, they're likely to come out in 10 years having created an entire world for themselves as a plaything.

They can cause harm to people tens of thousands of miles away using weirdly-named incantations - like "WannaCry".

They summon and control alien entities called "AIs". They don't always perfectly control those entities.

And they can amass unimaginable wealth and power by using these arcane skills.

What happens next?

Well, it's fairly clear from a cursory read over fantasy literature - and it's fresh in my mind as humans' reactions to magic users are also a key plot-point of Left-Hand Path.

They're either going to get worshipped as gods - or they get burned as a witch.

Thou Shalt Not Suffer A Programmer To Live

Obviously there are plenty of other reasons why society at large might be getting a bit skeptical of the tech giants, Silicon Valley, and so on. There's the wealth disparity. The diversity culture. The threat of strong AI. And more.

But I can't help but feel, looking at a lot of the media pushback at the moment, that a lot of it is straight-up fantasy novel 101 "Reactions To Wizardry".

And it's particularly ironic because most of the people reacting are surrounded by the same wizardry. They've got daemons in their phones. They're organising using services that have been carefully massaged to not require "magic" to use. Their cars and their TVs and their fridges all contain little bits of complex, arcane magic that can only be understood by the "wizards".

Don't get confused with the real-world "witch hunts" here. Those witches didn't (probably) actually have magical powers. This is something else.

And as I watch 2017 unfold in all its craziness, I do start wondering whether the conversation should be less about robots, and more about straight-up magic. About a world which is increasingly splitting into those who can wield magic, those who can pay the magicians, and those who just use the things magic enables.

Because that's the interesting part: whilst Arthur C. Clarke's maxim was true, and all advanced technology was arcane and difficult, these problems didn't occur. It's only now, as technology finally surpasses magic enough to eat society as a whole, rather than just the beardy guys in the towers studying eldritch tomes, that society as a whole notices the wizards in its midst.

Now, if you'll excuse me, I've just released an army of scuttling, gliding, limbless, bodiless eldritch horrors on the general population, and I've got to go see how they're reacting.

What do you think? Are programmers in danger of burning?


Logical

It's like I've always said--people just need more common sense. But not the kind of common sense that lets them figure out that they're being condescended to by someone who thinks they're stupid, because then I'll be in trouble.
jepler: it me

Covarr: Feelings may not be able to achieve much, but they provide purpose. #VaguelyPhilosophical