HTTP/2 in practice

Introduction to HTTP/2
By now we all know the benefits of the HTTP/2 protocol (h2/h2c).
In short, HTTP/2 brings:
- HPACK: header compression
- Server Push: preload
- Server Hints: prefetch
- Binary framing of the data on the wire
- A single TCP connection carrying several requests
So why write this article?
In my day-to-day work, a client told me they could not get the PUSH feature to work on their infrastructure hosted on AWS. I therefore wanted to dig deeper into the subject and share my analysis with you.
The PUSH feature on the test bench
Definition
Broadly, and without going into the technical details of HTTP/2 push, it lets the client (the web browser) receive content such as CSS or JS files directly, without asking for it (that is, without issuing an HTTP request for it).
This saves quite a few round trips, and in theory the page renders faster. Below is a simple figure showing how it works:
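In sequence form, a simplified sketch of a single pushed asset:

```
client                                server
  |--- HEADERS: GET /index.html ------->|
  |<-- PUSH_PROMISE: /main.css ---------|   the server announces the push
  |<-- HEADERS + DATA: /index.html -----|
  |<-- HEADERS + DATA: /main.css -------|   no request was ever sent for it
```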
Test architectures
Our two test architectures consist of:
- A public ALB
- n Varnish servers (version 5.2)
- An internal ALB
- n Nginx front ends
The difference between the two architectures is the presence or absence of HTTPS on the internal ALB and the Nginx backends.
I also bootstrapped a simple Symfony 4 app and followed this tutorial to implement push: https://dunglas.fr/2017/10/symfony-4-http2-push-and-preloading/
Below is the test HTML code that should let me exercise the PUSH feature:
<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Welcome!</title>
        <link rel="stylesheet" href="{{ preload(asset('main.css'), { as: 'style' }) }}">
    </head>
    <body>
        <main role="main" class="container">
            <h1>Hello World</h1>
        </main>
    </body>
</html>
First test: through the public ALB
Architectures A and B
nghttp -ans https://symfony-sample.xxx.net/
id responseEnd requestStart process code size request path
13 +63.90ms +76us 63.82ms 200 8K /
15 +69.81ms +63.91ms 5.90ms 200 91 /main.css
As we can see, the main.css asset is not pushed; if it were, a small '*' would mark it as such. So I take the opposite approach: I will run my tests starting from the source (Nginx) and move back up the chain, step by step, to understand where the problem comes from.
Second test: hitting Nginx directly
- Architecture A
- The nghttp command simply does not work: no output at all. Why? Because Nginx does not handle HTTP/2 over plain TCP (h2c), only over TLS (h2)
- Result: KO
- Architecture B
[root@ip-10-134-168-146 ~]# nghttp -ans https://127.0.0.1:32771/ -v
[WARNING]: -a, --get-assets option is ignored because
the binary was not compiled with libxml2.
[ 0.000] Connected
The negotiated protocol: h2
[ 0.003] recv SETTINGS frame <length=18, flags=0x00, stream_id=0>
(niv=3)
[SETTINGS_MAX_CONCURRENT_STREAMS(0x03):128]
[SETTINGS_INITIAL_WINDOW_SIZE(0x04):65536]
[SETTINGS_MAX_FRAME_SIZE(0x05):16777215]
[ 0.003] recv WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
(window_size_increment=2147418112)
[ 0.003] send SETTINGS frame <length=12, flags=0x00, stream_id=0>
(niv=2)
[SETTINGS_MAX_CONCURRENT_STREAMS(0x03):100]
[SETTINGS_INITIAL_WINDOW_SIZE(0x04):65535]
[ 0.003] send SETTINGS frame <length=0, flags=0x01, stream_id=0>
; ACK
(niv=0)
[ 0.003] send PRIORITY frame <length=5, flags=0x00, stream_id=3>
(dep_stream_id=0, weight=201, exclusive=0)
[ 0.003] send PRIORITY frame <length=5, flags=0x00, stream_id=5>
(dep_stream_id=0, weight=101, exclusive=0)
[ 0.003] send PRIORITY frame <length=5, flags=0x00, stream_id=7>
(dep_stream_id=0, weight=1, exclusive=0)
[ 0.003] send PRIORITY frame <length=5, flags=0x00, stream_id=9>
(dep_stream_id=7, weight=1, exclusive=0)
[ 0.003] send PRIORITY frame <length=5, flags=0x00, stream_id=11>
(dep_stream_id=3, weight=1, exclusive=0)
[ 0.003] send HEADERS frame <length=39, flags=0x25, stream_id=13>
; END_STREAM | END_HEADERS | PRIORITY
(padlen=0, dep_stream_id=11, weight=16, exclusive=0)
; Open new stream
:method: GET
:path: /
:scheme: https
:authority: 127.0.0.1:32771
accept: */*
accept-encoding: gzip, deflate
user-agent: nghttp2/1.21.1
[ 0.009] recv SETTINGS frame <length=0, flags=0x01, stream_id=0>
; ACK
(niv=0)
[ 0.026] recv (stream_id=13) :method: GET
[ 0.026] recv (stream_id=13) :path: /main.css
[ 0.026] recv (stream_id=13) :scheme: https
[ 0.026] recv (stream_id=13) :authority: 127.0.0.1:32771
[ 0.026] recv (stream_id=13) accept-encoding: gzip, deflate
[ 0.026] recv (stream_id=13) user-agent: nghttp2/1.21.1
[ 0.026] recv PUSH_PROMISE frame <length=52, flags=0x04, stream_id=13>
; END_HEADERS
(padlen=0, promised_stream_id=2)
[ 0.026] recv (stream_id=13) :status: 200
[ 0.026] recv (stream_id=13) server: nginx
[ 0.026] recv (stream_id=13) content-type: text/html; charset=UTF-8
[ 0.026] recv (stream_id=13) vary: Accept-Encoding
[ 0.026] recv (stream_id=13) cache-control: no-cache, private
[ 0.026] recv (stream_id=13) date: Wed, 28 Nov 2018 12:23:09 GMT
[ 0.026] recv (stream_id=13) link: </main.css>; rel="preload"; as="style"
[ 0.026] recv (stream_id=13) x-debug-token: e105ad
[ 0.026] recv (stream_id=13) x-debug-token-link: https://127.0.0.1:32771/_profiler/e105ad
[ 0.026] recv (stream_id=13) content-encoding: gzip
[ 0.026] recv HEADERS frame <length=210, flags=0x04, stream_id=13>
; END_HEADERS
(padlen=0)
; First response header
[ 0.026] recv DATA frame <length=8192, flags=0x00, stream_id=13>
[ 0.026] recv DATA frame <length=900, flags=0x01, stream_id=13>
; END_STREAM
[ 0.029] recv (stream_id=2) :status: 200
[ 0.029] recv (stream_id=2) server: nginx
[ 0.029] recv (stream_id=2) date: Wed, 28 Nov 2018 12:23:09 GMT
[ 0.029] recv (stream_id=2) content-type: text/css
[ 0.029] recv (stream_id=2) last-modified: Wed, 28 Nov 2018 09:16:35 GMT
[ 0.029] recv (stream_id=2) vary: Accept-Encoding
[ 0.029] recv (stream_id=2) etag: W/"5bfe5cf3-4f"
[ 0.029] recv (stream_id=2) content-encoding: gzip
[ 0.029] recv HEADERS frame <length=112, flags=0x04, stream_id=2>
; END_HEADERS
(padlen=0)
; First push response header
[ 0.029] recv DATA frame <length=91, flags=0x01, stream_id=2>
; END_STREAM
[ 0.029] send GOAWAY frame <length=8, flags=0x00, stream_id=0>
(last_stream_id=2, error_code=NO_ERROR(0x00), opaque_data(0)=[])
***** Statistics *****
Request timing:
responseEnd: the time when last byte of response was received
relative to connectEnd
requestStart: the time just before first byte of request was sent
relative to connectEnd. If '*' is shown, this was
pushed by server.
process: responseEnd - requestStart
code: HTTP status code
size: number of bytes received as response body without
inflation.
URI: request URI
see http://www.w3.org/TR/resource-timing/#processing-model
sorted by 'complete'
id responseEnd requestStart process code size request path
13 +23.49ms +727us 22.76ms 200 8K /
2 +26.76ms * +23.13ms 3.63ms 200 91 /main.css
- No problem here: we can clearly see that main.css is indeed PUSHed
- Result: OK
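For reference, the Nginx side of this is short: HTTP/2 must be enabled on a TLS listener (no h2c, as seen above), and the http2_push_preload directive (available since Nginx 1.13.9) tells Nginx to turn the Link: rel=preload response headers emitted by the application into actual pushes. A minimal sketch, where the server name, certificate paths and upstream address are placeholders for this setup:

```nginx
server {
    listen 443 ssl http2;              # h2 only works on a TLS listener
    server_name symfony-sample.xxx.net;

    ssl_certificate     /etc/nginx/ssl/server.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Convert "Link: </main.css>; rel=preload" headers coming from the
    # application into HTTP/2 PUSH_PROMISE frames
    http2_push_preload on;

    location / {
        # application upstream goes here (placeholder address)
        proxy_pass http://127.0.0.1:8080;
    }
}
```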
Third test: through the internal ALB
- Architecture A
- Same as Nginx on port 80: no HTTP/2
- Result: KO
- Architecture B
nghttp -ans -H "Host: symfony-sample.xxx.net" https://internal-xxx-web-alb-21283442.eu-west-3.elb.amazonaws.com/ -v
[ 0.004] Connected
The negotiated protocol: h2
[ 0.007] recv SETTINGS frame <length=18, flags=0x00, stream_id=0>
(niv=3)
[SETTINGS_MAX_CONCURRENT_STREAMS(0x03):128]
[SETTINGS_INITIAL_WINDOW_SIZE(0x04):65536]
[SETTINGS_MAX_FRAME_SIZE(0x05):16777215]
[ 0.007] recv WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
(window_size_increment=2147418112)
[ 0.007] send SETTINGS frame <length=12, flags=0x00, stream_id=0>
(niv=2)
[SETTINGS_MAX_CONCURRENT_STREAMS(0x03):100]
[SETTINGS_INITIAL_WINDOW_SIZE(0x04):65535]
[ 0.007] send SETTINGS frame <length=0, flags=0x01, stream_id=0>
; ACK
(niv=0)
[ 0.007] send PRIORITY frame <length=5, flags=0x00, stream_id=3>
(dep_stream_id=0, weight=201, exclusive=0)
[ 0.007] send PRIORITY frame <length=5, flags=0x00, stream_id=5>
(dep_stream_id=0, weight=101, exclusive=0)
[ 0.007] send PRIORITY frame <length=5, flags=0x00, stream_id=7>
(dep_stream_id=0, weight=1, exclusive=0)
[ 0.007] send PRIORITY frame <length=5, flags=0x00, stream_id=9>
(dep_stream_id=7, weight=1, exclusive=0)
[ 0.007] send PRIORITY frame <length=5, flags=0x00, stream_id=11>
(dep_stream_id=3, weight=1, exclusive=0)
[ 0.007] send HEADERS frame <length=107, flags=0x25, stream_id=13>
; END_STREAM | END_HEADERS | PRIORITY
(padlen=0, dep_stream_id=11, weight=16, exclusive=0)
; Open new stream
:method: GET
:path: /
:scheme: https
:authority: internal-xxx-web-alb-21283442.eu-west-3.elb.amazonaws.com
accept: */*
accept-encoding: gzip, deflate
user-agent: nghttp2/1.18.1
host: symfony-sample.xxx.net
[ 0.007] recv SETTINGS frame <length=0, flags=0x01, stream_id=0>
; ACK
(niv=0)
[ 0.034] recv (stream_id=13) :status: 200
[ 0.034] recv (stream_id=13) date: Wed, 28 Nov 2018 12:25:49 GMT
[ 0.034] recv (stream_id=13) content-type: text/html; charset=UTF-8
[ 0.034] recv (stream_id=13) server: nginx
[ 0.034] recv (stream_id=13) vary: Accept-Encoding
[ 0.034] recv (stream_id=13) cache-control: no-cache, private
[ 0.034] recv (stream_id=13) link: </main.css>; rel="preload"; as="style"
[ 0.034] recv (stream_id=13) x-debug-token: 3616a2
[ 0.034] recv (stream_id=13) x-debug-token-link: https://internal-xxx-web-alb-21283442.eu-west-3.elb.amazonaws.com/_profiler/3616a2
[ 0.034] recv (stream_id=13) content-encoding: gzip
[ 0.034] recv HEADERS frame <length=260, flags=0x04, stream_id=13>
; END_HEADERS
(padlen=0)
; First response header
[ 0.035] recv DATA frame <length=8192, flags=0x00, stream_id=13>
[ 0.035] recv DATA frame <length=956, flags=0x00, stream_id=13>
[ 0.035] recv DATA frame <length=0, flags=0x01, stream_id=13>
; END_STREAM
[ 0.035] send HEADERS frame <length=21, flags=0x25, stream_id=15>
; END_STREAM | END_HEADERS | PRIORITY
(padlen=0, dep_stream_id=3, weight=32, exclusive=0)
; Open new stream
:method: GET
:path: /main.css
:scheme: https
:authority: internal-xxx-web-alb-21283442.eu-west-3.elb.amazonaws.com
accept: */*
accept-encoding: gzip, deflate
user-agent: nghttp2/1.18.1
host: symfony-sample.xxx.net
[ 0.039] recv (stream_id=15) :status: 200
[ 0.039] recv (stream_id=15) date: Wed, 28 Nov 2018 12:25:49 GMT
[ 0.039] recv (stream_id=15) content-type: text/css
[ 0.039] recv (stream_id=15) server: nginx
[ 0.039] recv (stream_id=15) last-modified: Wed, 28 Nov 2018 09:16:35 GMT
[ 0.039] recv (stream_id=15) vary: Accept-Encoding
[ 0.039] recv (stream_id=15) etag: W/"5bfe5cf3-4f"
[ 0.039] recv (stream_id=15) content-encoding: gzip
[ 0.039] recv HEADERS frame <length=133, flags=0x04, stream_id=15>
; END_HEADERS
(padlen=0)
; First response header
[ 0.039] recv DATA frame <length=91, flags=0x00, stream_id=15>
[ 0.039] recv DATA frame <length=0, flags=0x01, stream_id=15>
; END_STREAM
[ 0.039] send GOAWAY frame <length=8, flags=0x00, stream_id=0>
(last_stream_id=0, error_code=NO_ERROR(0x00), opaque_data(0)=[])
***** Statistics *****
Request timing:
responseEnd: the time when last byte of response was received
relative to connectEnd
requestStart: the time just before first byte of request was sent
relative to connectEnd. If '*' is shown, this was
pushed by server.
process: responseEnd - requestStart
code: HTTP status code
size: number of bytes received as response body without
inflation.
URI: request URI
see http://www.w3.org/TR/resource-timing/#processing-model
sorted by 'complete'
id responseEnd requestStart process code size request path
13 +28.56ms +179us 28.38ms 200 8K /
15 +32.55ms +28.58ms 3.97ms 200 91 /main.css
- HTTP/2 works, but no PUSH
- Result: KO
Fourth test: through Varnish
- Architectures A and B
- The nghttp command returns nothing at all
- So I test with curl instead:
curl -k -svo /dev/null --http2 -H "Host: symfony-sample.xxx.net" http://127.0.0.1/httpprobe
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /httpprobe HTTP/1.1
> Host: symfony-sample.xxx.net
> User-Agent: curl/7.52.1
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAA
>
< HTTP/1.1 200 OK
< Date: Wed, 28 Nov 2018 13:02:46 GMT
< Server: Varnish
< X-Varnish: 40565
< Content-Length: 1000
< Accept-Ranges: bytes
< Connection: keep-alive
<
{ [1000 bytes data]
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact
- Still nothing? No HTTP/2? Yet Varnish states that h2c has been implemented since version 5.0. Let's take a look at the RFC to try to understand: https://tools.ietf.org/html/rfc7540#section-3.2
3.2. Starting HTTP/2 for "http" URIs
A client that makes a request for an "http" URI without prior
knowledge about support for HTTP/2 on the next hop uses the HTTP
Upgrade mechanism (Section 6.7 of [RFC7230]). The client does so by
making an HTTP/1.1 request that includes an Upgrade header field with
the "h2c" token. Such an HTTP/1.1 request MUST include exactly one
HTTP2-Settings (Section 3.2.1) header field.
For example:
GET / HTTP/1.1
Host: server.example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url encoding of HTTP/2 SETTINGS payload>
Requests that contain a payload body MUST be sent in their entirety
before the client can send HTTP/2 frames. This means that a large
request can block the use of the connection until it is completely
sent.
If concurrency of an initial request with subsequent requests is
important, an OPTIONS request can be used to perform the upgrade to
HTTP/2, at the cost of an additional round trip.
A server that does not support HTTP/2 can respond to the request as
though the Upgrade header field were absent:
HTTP/1.1 200 OK
Content-Length: 243
Content-Type: text/html
...
A server MUST ignore an "h2" token in an Upgrade header field.
Presence of a token with "h2" implies HTTP/2 over TLS, which is
instead negotiated as described in Section 3.3.
A server that supports HTTP/2 accepts the upgrade with a 101
(Switching Protocols) response. After the empty line that terminates
the 101 response, the server can begin sending HTTP/2 frames. These
frames MUST include a response to the request that initiated the
upgrade.
For example:
HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c
[ HTTP/2 connection ...
The first HTTP/2 frame sent by the server MUST be a server connection
preface (Section 3.5) consisting of a SETTINGS frame (Section 6.5).
Upon receiving the 101 response, the client MUST send a connection
preface (Section 3.5), which includes a SETTINGS frame.
The HTTP/1.1 request that is sent prior to upgrade is assigned a
stream identifier of 1 (see Section 5.1.1) with default priority
values (Section 5.3.5). Stream 1 is implicitly "half-closed" from
the client toward the server (see Section 5.1), since the request is
completed as an HTTP/1.1 request. After commencing the HTTP/2
connection, stream 1 is used for the response.
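A quick aside: the HTTP2-Settings value that curl sent in the trace above (AAMAAABkAARAAAAA) is exactly this base64url encoding of a raw SETTINGS payload. Each entry is a 2-byte identifier followed by a 4-byte big-endian value, so it can be decoded by hand:

```shell
# Decode curl's HTTP2-Settings header value: base64url of a raw HTTP/2
# SETTINGS payload (this one contains no '-' or '_', so tr is a no-op).
printf 'AAMAAABkAARAAAAA' | tr -- '-_' '+/' | base64 -d | xxd -p
# Output: 000300000064000440000000
#   0003 00000064 -> SETTINGS_MAX_CONCURRENT_STREAMS (0x3) = 100
#   0004 40000000 -> SETTINGS_INITIAL_WINDOW_SIZE    (0x4) = 1073741824
```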
To summarize this part of the RFC: Varnish is supposed to return a "101 Switching Protocols" response in order to switch to HTTP/2. So why on earth doesn't it?! After some digging, I came across this magic command:
varnishadm param.show feature
feature
Value is: none (default)
Enable/Disable various minor features.
none Disable all features.
Use +/- prefix to enable/disable individual feature:
short_panic Short panic message.
wait_silo Wait for persistent silo.
no_coredump No coredumps.
esi_ignore_https Treat HTTPS as HTTP in
ESI:includes
esi_disable_xml_check Don't check of body looks like
XML
esi_ignore_other_elements Ignore non-esi XML-elements
esi_remove_bom Remove UTF-8 BOM
https_scheme Also split https URIs
http2 Support HTTP/2 protocol
So http2 is not enabled by default in Varnish! Ouch, ouch, ouch...
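The feature flag can be flipped at runtime through varnishadm, or set persistently at startup; a sketch of the operation:

```shell
# Enable HTTP/2 support on a running Varnish (>= 5.0) at runtime
varnishadm param.set feature +http2

# Or make it permanent by starting varnishd with:
#   varnishd -p feature=+http2 ...
```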
Once enabled, it looks like this:
varnishadm param.show feature
feature
Value is: +http2
Default is: none
....
Let's retry our curl:
curl -k -svo /dev/null --http2 -H "Host: symfony-sample.xxx.net" http://127.0.0.1/httpprobe
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /httpprobe HTTP/1.1
> Host: symfony-sample.xxx.net
> User-Agent: curl/7.52.1
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAA
>
< HTTP/1.1 101 Switching Protocols
< Connection: Upgrade
< Upgrade: h2c
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< date: Wed, 28 Nov 2018 13:04:26 GMT
< server: Varnish
< x-varnish: 12
< content-length: 1000
< accept-ranges: bytes
<
{ [1000 bytes data]
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact
- Yes yes yes! We got our Switching Protocols!
- But still no PUSH...
- Result: KO
Final result
Even with HTTP/2 enabled end to end, there was no way to get PUSH working...
The explanation is quite simply (I believe) that the reverse proxies do not speak HTTP/2 on their connections to the backends.
The benefit of HTTP/2 is very real on the client side, but almost nil for the connections between our various reverse proxies. Still a shame, because it apparently rules out certain features, PUSH among them...
With this type of infrastructure, we cannot take advantage of everything the protocol has to offer.
Below is a short summary for each middleware tested:
- Varnish
- Supports HTTP/2, but not by default; remember to enable it.
- Nginx
- Nginx only supports HTTP/2 over TLS (h2), and only on the client side
- HTTP/2 is therefore not supported with proxy_pass. Nginx's rationale is that on low-latency networks with keepalive enabled, HTTP/2 brings no benefit:
At the moment, we only support HTTP/2 on the client side. You can’t configure HTTP/2 with proxy_pass. But what is the point of HTTP/2 on the backend side? Because as you can see from the benchmarks, there’s not much benefit in HTTP/2 for low‑latency networks such as upstream connections. Also, in NGINX you have the keepalive module, and you can configure a keepalive cache. The main performance benefit of HTTP/2 is to eliminate additional handshakes, but if you do that already with a keepalive cache, you don’t need HTTP/2 on the upstream side.
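The keepalive cache the quote refers to is configured on the upstream block; a minimal sketch, with a hypothetical upstream name and address:

```nginx
upstream backend {
    server 10.0.0.10:8080;
    # Keep up to 32 idle connections open per worker, avoiding the repeated
    # TCP handshakes that HTTP/2 would otherwise save on the upstream side
    keepalive 32;
}

server {
    location / {
        proxy_pass http://backend;
        # keepalive to upstreams requires HTTP/1.1 and an empty
        # Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```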
- AWS ALB
- Same as Nginx: the connections to the backends use HTTP/1.1, and HTTP/2 is only supported on the client side, with an HTTPS listener: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration
- Moreover, the PUSH feature cannot be used:
Application Load Balancers provide native support for HTTP/2 with HTTPS listeners. You can send up to 128 requests in parallel using one HTTP/2 connection. The load balancer converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group. Because HTTP/2 uses front-end connections more efficiently, you might notice fewer connections between clients and the load balancer. You can't use the server-push feature of HTTP/2.