HTTP & HTTPS Protocols
HTTP/1.0 (1996)
The first widely adopted version of HTTP.
Key Characteristics:
- New TCP connection per request: Every request requires a new TCP handshake (expensive!)
- No persistent connections: Connection closes after each request/response
- Simple text-based protocol: Headers and body in plain text
- No compression: Headers sent uncompressed every time
```plantuml
@startuml
title HTTP/1.0 - Multiple Connections
hide footbox
participant Browser
participant Server

== Request 1: HTML ==
Browser -> Server: TCP Handshake (SYN, SYN-ACK, ACK)
Browser -> Server: GET /index.html HTTP/1.0
Server --> Browser: 200 OK\n<html>...</html>
Browser -> Server: TCP Close (FIN)
note right
  Connection closed after each request
end note

== Request 2: CSS ==
Browser -> Server: TCP Handshake (SYN, SYN-ACK, ACK)
Browser -> Server: GET /style.css HTTP/1.0
Server --> Browser: 200 OK\nbody { ... }
Browser -> Server: TCP Close (FIN)

== Request 3: Image ==
Browser -> Server: TCP Handshake (SYN, SYN-ACK, ACK)
Browser -> Server: GET /logo.png HTTP/1.0
Server --> Browser: 200 OK\n[binary image]
Browser -> Server: TCP Close (FIN)

legend right
  Problems:
  - 3 resources = 3 TCP handshakes
  - Slow (latency for each handshake)
  - Server overhead
endlegend
@enduml
```

Problems:
- High Latency: Each TCP handshake adds ~100ms round-trip time
- Server Resource Waste: Opening/closing connections constantly
- Poor Performance: Loading a page with 50 resources = 50 connections
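The cost shows up even in a toy setup. Below is a minimal sketch (local throwaway server, arbitrary paths) of HTTP/1.0 behavior: every request opens and tears down its own TCP connection.

```python
# Sketch: HTTP/1.0 closes the connection after every response,
# so each request pays for a fresh TCP handshake.
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.0"          # no keep-alive
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(path):
    # A brand-new TCP connection (handshake) for every single request.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: localhost\r\n\r\n".encode())
        data = b""
        while chunk := s.recv(4096):       # server closes -> recv returns b""
            data += chunk
        return data

r1 = fetch("/index.html")
r2 = fetch("/style.css")                   # second handshake, second connection
print(r1.split(b"\r\n")[0])                # → b'HTTP/1.0 200 OK'
server.shutdown()
```

A page with 50 resources would repeat the `fetch` handshake cost 50 times.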
HTTP/1.1 (1997)
Major improvements to address HTTP/1.0 inefficiencies.
Key Features:
1. Persistent Connections (Keep-Alive)
Reuse the same TCP connection for multiple requests.
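Connection reuse can be sketched with Python's stdlib `http.client` against a local throwaway server (the paths here are arbitrary): three HTTP/1.1 requests travel over one TCP connection.

```python
# Sketch: HTTP/1.1 keep-alive - one HTTPConnection object keeps a single
# socket open and sends several requests over it.
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # enables persistent connections
    def do_GET(self):
        body = self.path.encode()          # echo the requested path
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_address[1])
bodies = []
for path in ("/index.html", "/style.css", "/logo.png"):
    conn.request("GET", path)              # same underlying socket every time
    resp = conn.getresponse()
    bodies.append(resp.read().decode())

print(bodies)                              # → ['/index.html', '/style.css', '/logo.png']
conn.close()
server.shutdown()
```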
```plantuml
@startuml
title HTTP/1.1 - Persistent Connection
hide footbox
participant Browser
participant Server

Browser -> Server: TCP Handshake (once)
note right
  Connection stays open for multiple requests
end note

Browser -> Server: GET /index.html HTTP/1.1\nConnection: keep-alive
Server --> Browser: 200 OK\n<html>...</html>

Browser -> Server: GET /style.css HTTP/1.1\nConnection: keep-alive
Server --> Browser: 200 OK\nbody { ... }

Browser -> Server: GET /logo.png HTTP/1.1\nConnection: keep-alive
Server --> Browser: 200 OK\n[binary image]

note right
  Same connection
  Much faster!
end note

Browser -> Server: Connection: close
Server -> Browser: TCP Close

legend right
  Improvement:
  - 3 resources = 1 TCP handshake
  - Much faster
  - Less server overhead
endlegend
@enduml
```

2. Pipelining
Send multiple requests without waiting for responses (but still has issues).
```plantuml
@startuml
title HTTP/1.1 Pipelining (Theoretical)
hide footbox
participant Browser
participant Server

Browser -> Server: GET /index.html
Browser -> Server: GET /style.css
Browser -> Server: GET /script.js
note right
  Send all requests without waiting
end note

Server --> Browser: Response 1 (HTML)
Server --> Browser: Response 2 (CSS)
Server --> Browser: Response 3 (JS)
note left
  Responses must come in ORDER (FIFO)
end note

note bottom
  ❌ Head-of-Line Blocking Problem:
  If response 1 is slow, responses 2 and 3 must wait!
  Rarely used in practice because of this.
end note
@enduml
```

3. Host Header (Virtual Hosting)
Multiple domains on one IP address.
```http
GET /index.html HTTP/1.1
Host: example.com          ← Required header

GET /about.html HTTP/1.1
Host: another.com          ← Same server, different site
```

HTTP/1.1 Limitations:
```plantuml
@startuml
title HTTP/1.1 Head-of-Line Blocking
hide footbox
participant Browser
participant Server

Browser -> Server: Request 1 (fast resource)
Browser -> Server: Request 2 (slow resource - 5s)
Browser -> Server: Request 3 (fast resource)

note right of Server
  Processing...
end note

Server --> Browser: Response 1 ✅
note right
  Request 2 is slow!
  Blocking everything...
end note

... 5 seconds ...

Server --> Browser: Response 2 ⏳
Server --> Browser: Response 3 ✅

note bottom
  Problem: Request 3 (fast) waited 5 seconds
  because Request 2 (slow) blocked the pipeline.
  This is Head-of-Line (HOL) Blocking.
end note
@enduml
```

Workarounds Browsers Use:
- Open 6-8 parallel TCP connections per domain
- Still wasteful and limited
HTTP/2 (2015)
Revolutionary changes to solve HTTP/1.1 problems.
Key Features:
1. Binary Protocol (Not Text)
```plantuml
@startuml
title HTTP/1.1 vs HTTP/2 Format
left to right direction

package "HTTP/1.1 (Text)" {
  rectangle "GET /api/users HTTP/1.1\nHost: example.com\nUser-Agent: Chrome\nContent-Type: application/json\n\n{\"id\": 123}" as HTTP1 #fee2e2
  note bottom of HTTP1
    Human-readable
    Larger size
    Slower parsing
  end note
}

package "HTTP/2 (Binary Frames)" {
  rectangle "Frame Type: HEADERS\nStream ID: 1\n[binary data]\n\nFrame Type: DATA\nStream ID: 1\n[binary data]" as HTTP2 #dcfce7
  note bottom of HTTP2
    Binary format
    Smaller size
    Faster parsing
  end note
}
@enduml
```

2. Multiplexing (No More Head-of-Line Blocking!)
Multiple requests/responses over a single TCP connection without blocking.
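The out-of-order behavior is the key idea. This toy `asyncio` simulation (not a real HTTP/2 stack; stream IDs and delays are made up) shows three "streams" in flight at once, with the slow one blocking nobody:

```python
# Toy model of HTTP/2 multiplexing: each request is tagged with a stream
# ID, all are in flight concurrently, and responses complete in whatever
# order the server finishes them.
import asyncio

async def handle_stream(stream_id: int, delay: float, completed: list):
    await asyncio.sleep(delay)             # pretend the server is working
    completed.append(stream_id)            # "response frame" for this stream

async def main():
    completed = []
    await asyncio.gather(                  # all three streams at once
        handle_stream(1, 0.01, completed),  # fast
        handle_stream(2, 0.20, completed),  # slow
        handle_stream(3, 0.05, completed),  # fast
    )
    return completed

order = asyncio.run(main())
print(order)                               # → [1, 3, 2] - slow stream 2 arrives last
```

Under HTTP/1.1 pipelining the same three requests would have been forced to complete as [1, 2, 3], with 3 stuck behind 2.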
```plantuml
@startuml
title HTTP/2 Multiplexing
hide footbox
participant Browser
participant "Single TCP\nConnection" as TCP
participant Server

Browser -> TCP: Request 1 (Stream 1) - fast
Browser -> TCP: Request 2 (Stream 2) - slow
Browser -> TCP: Request 3 (Stream 3) - fast
note right
  All sent simultaneously on SAME connection
  Each has unique Stream ID
end note

TCP -> Server: Forward all streams

Server --> TCP: Response 1 (Stream 1) ✅
note left
  Responses can come back in ANY order!
  No blocking!
end note

Server --> TCP: Response 3 (Stream 3) ✅
note left
  Stream 3 doesn't wait for slow Stream 2
end note

... Stream 2 processing ...

Server --> TCP: Response 2 (Stream 2) ⏳

TCP --> Browser: All responses received

legend right
  ✅ No Head-of-Line Blocking
  ✅ Single TCP connection
  ✅ Parallel requests
  ✅ Responses in any order
endlegend
@enduml
```

3. Header Compression (HPACK)
Headers are compressed using the HPACK algorithm.
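The core trick is an indexed header table shared by both sides. This toy sketch (not the real HPACK wire format; the index offset of 62 just mimics HPACK's static-table numbering) shows repeated headers collapsing into tiny index references:

```python
# Toy sketch of the HPACK idea: the first time a header is sent it goes
# as a full literal and is stored in a table; repeats are sent as a
# small index reference instead of the full string.
class ToyHeaderTable:
    def __init__(self):
        self.table = {}                    # (name, value) -> index

    def encode(self, headers):
        out = []
        for name, value in headers:
            key = (name, value)
            if key in self.table:
                out.append(("INDEX", self.table[key]))   # ~2 bytes on the wire
            else:
                self.table[key] = len(self.table) + 62   # toy offset, like HPACK
                out.append(("LITERAL", name, value))     # full bytes, then stored
        return out

enc = ToyHeaderTable()
req1 = enc.encode([("user-agent", "Mozilla/5.0"),
                   ("authorization", "Bearer eyJhbG...")])
req2 = enc.encode([("user-agent", "Mozilla/5.0"),
                   ("authorization", "Bearer eyJhbG...")])
print(req2)  # → [('INDEX', 62), ('INDEX', 63)]
```

Real HPACK also Huffman-encodes literals and has eviction rules for the dynamic table, which this sketch ignores.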
```text
HTTP/1.1 (Repeated Headers - Wasteful)
─────────────────────────────────────────
Request 1:
  User-Agent: Mozilla/5.0 ...       (200 bytes)
  Authorization: Bearer eyJhbG...   (300 bytes)
  Cookie: session=abc123...         (100 bytes)

Request 2:
  User-Agent: Mozilla/5.0 ...       (200 bytes)  ← Duplicate!
  Authorization: Bearer eyJhbG...   (300 bytes)  ← Duplicate!
  Cookie: session=abc123...         (100 bytes)  ← Duplicate!

Total: 1200 bytes for 2 requests

HTTP/2 (HPACK Compression)
─────────────────────────────────────────
Request 1:
  :method: GET
  :path: /users
  User-Agent: Mozilla/5.0 ...       (200 bytes)
  Authorization: Bearer eyJhbG...   (300 bytes)
  [Stored in compression table with index]

Request 2:
  :method: GET
  :path: /posts
  User-Agent: [Reference: Index 62]      (2 bytes)  ← Compressed!
  Authorization: [Reference: Index 63]   (2 bytes)  ← Compressed!

Total: ~504 bytes for 2 requests (58% savings!)
```

4. Server Push
Server can send resources before the client asks for them.
```plantuml
@startuml
title HTTP/2 Server Push
hide footbox
participant Browser
participant Server

Browser -> Server: GET /index.html HTTP/2
note right
  Client only requests HTML
end note

Server -> Server: Parse HTML\nFound: <link rel="stylesheet" href="style.css">
note left
  Server knows client will need style.css
end note

Server --> Browser: PUSH_PROMISE: /style.css
note left
  Server proactively pushes CSS
end note

Server --> Browser: 200 OK\n<html>...</html>
Server --> Browser: 200 OK\n/style.css content

Browser -> Browser: Parse HTML\nOh, I need style.css!
Browser -> Browser: Already in cache from push!
note right
  No extra request needed
  Faster page load
end note

legend right
  ✅ Eliminates round trips
  ✅ Faster page loads
  ⚠️ Must be used carefully
  (can waste bandwidth if wrong)
endlegend
@enduml
```

5. Stream Prioritization
Tell the server which resources are more important.
```plantuml
@startuml
title HTTP/2 Stream Prioritization
hide footbox
participant Browser
participant Server

Browser -> Server: Stream 1: /index.html (Priority: HIGH)
Browser -> Server: Stream 2: /style.css (Priority: HIGH)
Browser -> Server: Stream 3: /analytics.js (Priority: LOW)
Browser -> Server: Stream 4: /ad-banner.png (Priority: LOW)

note right of Browser
  Browser tells server what's important
end note

Server -> Server: Prioritize Streams 1 & 2
note left
  Server processes high-priority first
end note

Server --> Browser: Stream 1: HTML ✅
Server --> Browser: Stream 2: CSS ✅
note left
  Critical rendering path loads first
end note

Server --> Browser: Stream 3: Analytics
Server --> Browser: Stream 4: Ad banner

legend right
  Result:
  Faster perceived load time
  Critical resources load first
endlegend
@enduml
```

HTTP Version Comparison
```plantuml
@startuml
title HTTP Evolution Timeline
left to right direction

rectangle "HTTP/1.0\n(1996)" as HTTP10 #fee2e2
rectangle "HTTP/1.1\n(1997)" as HTTP11 #fef3c7
rectangle "HTTP/2\n(2015)" as HTTP2 #dcfce7
rectangle "HTTP/3\n(2022)" as HTTP3 #e0e7ff

HTTP10 -right-> HTTP11 : + Keep-alive\n+ Host header\n+ Pipelining
HTTP11 -right-> HTTP2 : + Multiplexing\n+ Binary\n+ Header compression\n+ Server push
HTTP2 -right-> HTTP3 : + QUIC (UDP)\n+ Better mobile\n+ 0-RTT

note bottom of HTTP10
  ❌ New TCP per request
  ❌ No compression
  ❌ Text-based
end note

note bottom of HTTP11
  ✅ Persistent connections
  ⚠️ HOL blocking
  ⚠️ Limited parallelism
end note

note bottom of HTTP2
  ✅ True parallelism
  ✅ Binary protocol
  ✅ Header compression
  ⚠️ Still TCP (HOL at TCP level)
end note

note bottom of HTTP3
  ✅ No TCP HOL blocking
  ✅ Faster connections (QUIC)
  ✅ Better for lossy networks
end note
@enduml
```

Performance Comparison Table
| Feature | HTTP/1.0 | HTTP/1.1 | HTTP/2 |
|---|---|---|---|
| Connection | New per request | Persistent (keep-alive) | Single multiplexed |
| Requests/Connection | 1 | Sequential | Parallel (unlimited) |
| Header Compression | ❌ | ❌ | ✅ (HPACK) |
| Binary Protocol | ❌ | ❌ | ✅ |
| Server Push | ❌ | ❌ | ✅ |
| Stream Priority | ❌ | ❌ | ✅ |
| Head-of-Line Blocking | Yes (worst) | Yes (even with pipelining) | No at HTTP level (still at TCP level) |
| Browser Support | Legacy | Universal | Universal (HTTPS only) |
HTTPS (HTTP Secure)
HTTPS is HTTP with encryption via TLS/SSL. It's not a separate protocol version: it's HTTP running over an encrypted connection.
HTTPS is the secure version of HTTP used for web browsing: it encrypts data using the TLS protocol. TLS (Transport Layer Security) is a general-purpose cryptographic protocol that secures communications, and HTTPS is a specific application of TLS for websites. In essence, HTTPS relies on TLS to encrypt the connection between your browser and a website.
Key Differences from HTTP:
| Feature | HTTP | HTTPS |
|---|---|---|
| Port | 80 | 443 |
| Encryption | ❌ None | ✅ TLS/SSL |
| Data Visibility | Plaintext (anyone can read) | Encrypted (only endpoints can decrypt) |
| Certificate | Not required | Required (from CA) |
| Browser Indicator | "Not Secure" warning | 🔒 Padlock icon |
| SEO Ranking | Lower | Higher (Google prefers HTTPS) |
```plantuml
@startuml
title HTTP vs HTTPS Data Flow
hide footbox
participant Browser
participant Attacker
participant Server

== HTTP (Insecure) ==
Browser -> Server: GET /login?user=john&pass=secret123
note right
  ❌ Plaintext
  Anyone can read it!
end note

Attacker -> Attacker: Intercept packet\nSteal password!
note right of Attacker
  Man-in-the-Middle Attack
  Password exposed
end note

Server --> Browser: 200 OK\nWelcome John!

== HTTPS (Secure) ==
Browser -> Server: TLS Handshake\n(Establish encryption)
note right
  Exchange encryption keys
  Verify server identity
end note

Browser -> Server: [Encrypted data]\n0x8f3a2b... (gibberish)
note right
  ✅ Even if intercepted,
  cannot be read
end note

Attacker -> Attacker: Intercept packet\nSee only encrypted bytes
note right of Attacker
  ❌ Cannot decrypt
  Password safe!
end note

Server --> Browser: [Encrypted response]\n0x2c9d1f...

legend right
  HTTPS protects:
  - Passwords and credentials
  - Personal information
  - Payment details
  - Session cookies
endlegend
@enduml
```

What HTTPS Protects Against
```plantuml
@startuml
title HTTPS Security Benefits
left to right direction

package "HTTP Vulnerabilities" {
  rectangle "Eavesdropping\n(Packet Sniffing)" as V1 #fee2e2
  rectangle "Man-in-the-Middle\n(MITM)" as V2 #fee2e2
  rectangle "Data Tampering" as V3 #fee2e2
  rectangle "Identity Spoofing" as V4 #fee2e2
}

package "HTTPS Protections" {
  rectangle "Encryption\n(TLS)" as P1 #dcfce7
  rectangle "Certificate\nValidation" as P2 #dcfce7
  rectangle "Integrity\nChecks" as P3 #dcfce7
  rectangle "Authentication" as P4 #dcfce7
}

V1 -right-> P1 : Prevents
V2 -right-> P2 : Prevents
V3 -right-> P3 : Prevents
V4 -right-> P4 : Prevents

note bottom of V1
  Attacker reads WiFi traffic
  Sees passwords in plaintext
end note

note bottom of P1
  All data encrypted
  Only gibberish visible
end note
@enduml
```

TLS (Transport Layer Security)
TLS is the encryption protocol that powers HTTPS. It evolved from SSL (Secure Sockets Layer).
Timeline:
- SSL 1.0 (1994) - Never released
- SSL 2.0 (1995) - Deprecated (insecure)
- SSL 3.0 (1996) - Deprecated (POODLE attack)
- TLS 1.0 (1999) - Based on SSL 3.0
- TLS 1.1 (2006) - Minor improvements
- TLS 1.2 (2008) - Still widely used
- TLS 1.3 (2018) - Current standard (faster, more secure)
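In practice, a client should refuse the deprecated versions in the timeline above. A minimal sketch with Python's stdlib `ssl` module, pinning the floor at TLS 1.2:

```python
# Sketch: configure a client-side TLS context that rejects SSL 3.0,
# TLS 1.0, and TLS 1.1, allowing only TLS 1.2 and 1.3.
import ssl

ctx = ssl.create_default_context()             # secure defaults + CA bundle
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse the deprecated versions

print(ctx.minimum_version)                     # → TLSVersion.TLSv1_2
print(ctx.verify_mode == ssl.CERT_REQUIRED)    # → True (certificates checked)
```

Passing this context to `http.client.HTTPSConnection` or `urllib.request.urlopen` makes every connection honor the floor.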
TLS Handshake Process
```plantuml
@startuml
title TLS 1.2 Handshake (Simplified)
hide footbox
participant "Client\n(Browser)" as Client
participant "Server\n(example.com)" as Server

== Phase 1: Hello ==
Client -> Server: ClientHello\n- TLS version: 1.2\n- Cipher suites: [AES-256-GCM, ChaCha20...]\n- Random bytes (Client Random)
note right
  Client announces:
  - What encryption it supports
  - Random data for key generation
end note

Server -> Client: ServerHello\n- Chosen cipher: AES-256-GCM\n- Random bytes (Server Random)\n- **Certificate** (Public Key + CA signature)
note left
  Server sends:
  - Chosen encryption method
  - Digital certificate (proves identity)
  - Random data
end note

== Phase 2: Certificate Verification ==
Client -> Client: Verify certificate\n1. Check CA signature\n2. Check expiration date\n3. Check domain name
note right
  ✅ Certificate from trusted CA?
  ✅ Not expired?
  ✅ Matches example.com?
end note

== Phase 3: Key Exchange ==
Client -> Server: ClientKeyExchange\n- Pre-master secret (encrypted with server's public key)
note right
  Client generates random "pre-master secret"
  and encrypts it with server's public key
  (only server can decrypt)
end note

Client -> Client: Generate Session Keys\n= f(Client Random, Server Random, Pre-master Secret)
Server -> Server: Decrypt pre-master secret\nGenerate Session Keys
note right
  Both sides now have identical
  session keys (symmetric encryption)
end note

== Phase 4: Finish ==
Client -> Server: Finished (encrypted with session key)
note right
  ✅ Everything encrypted from now on
end note

Server -> Client: Finished (encrypted with session key)

Client -> Server: [Encrypted HTTP Request]\nGET /api/users
Server -> Client: [Encrypted HTTP Response]\n200 OK {"users": [...]}

legend right
  TLS 1.2 Handshake: ~2 round trips
  TLS 1.3 Handshake: ~1 round trip (faster!)
endlegend
@enduml
```

TLS 1.3 Improvements (2018)
```plantuml
@startuml
title TLS 1.2 vs TLS 1.3 Handshake Speed
hide footbox
participant Client12 as "Client\n(TLS 1.2)"
participant Server12 as "Server\n(TLS 1.2)"
participant Client13 as "Client\n(TLS 1.3)"
participant Server13 as "Server\n(TLS 1.3)"

== TLS 1.2 (2 Round Trips) ==
Client12 -> Server12: ClientHello
note right
  Round Trip 1
end note

Server12 -> Client12: ServerHello\nCertificate\nKeyExchange
note left
  Round Trip 2
end note

Client12 -> Server12: KeyExchange\nFinished

Server12 -> Client12: Finished
note right
  ⏱️ ~100-200ms (depending on latency)
end note

Client12 -> Server12: HTTP Request (encrypted)

== TLS 1.3 (1 Round Trip) ==
Client13 -> Server13: ClientHello\n+ Key Share (guess)
note right
  Client pre-sends key material
end note

Server13 -> Client13: ServerHello\nCertificate\nFinished
note left
  Server can finish immediately
end note

Client13 -> Server13: HTTP Request (encrypted)
note right
  ⏱️ ~50-100ms
  50% faster!
end note

legend right
  TLS 1.3 Benefits:
  ✅ Faster handshake (1-RTT)
  ✅ 0-RTT for repeat connections
  ✅ Removed weak ciphers
  ✅ Forward secrecy by default
endlegend
@enduml
```

How TLS Certificates Work
```plantuml
@startuml
title Certificate Authority (CA) Trust Chain

rectangle "Root CA\n(DigiCert, Let's Encrypt)" as Root #fef3c7
note right of Root
  Pre-installed in browsers
  Highly trusted
  Self-signed
end note

rectangle "Intermediate CA" as Intermediate #dbeafe
note right of Intermediate
  Signed by Root CA
  Issues certificates
end note

rectangle "example.com\nCertificate" as Cert #dcfce7
note right of Cert
  Signed by Intermediate CA
  Contains:
  - Domain name
  - Public key
  - Expiration date
end note

rectangle "Browser" as Browser #e0e7ff
note left of Browser
  Validates chain:
  1. Cert signed by Intermediate?
  2. Intermediate signed by Root?
  3. Root in trusted list?
end note

Root -down-> Intermediate : Signs
Intermediate -down-> Cert : Signs
Browser -up-> Cert : Verifies

legend bottom
  If any link breaks (expired, untrusted CA),
  browser shows "Your connection is not private"
endlegend
@enduml
```

Certificate Contents:
```text
Certificate:
  Subject: CN=example.com
  Issuer: CN=Let's Encrypt Authority X3
  Validity:
    Not Before: Jan  1 00:00:00 2024 GMT
    Not After : Apr  1 00:00:00 2024 GMT
  Public Key: RSA 2048 bit
  Signature Algorithm: sha256WithRSAEncryption
  X509v3 Subject Alternative Name:
    DNS:example.com, DNS:*.example.com
```

Symmetric vs Asymmetric Encryption in TLS
```plantuml
@startuml
title TLS Uses Both Encryption Types
left to right direction

package "Asymmetric (Public Key)\nUsed During Handshake" {
  rectangle "Server Public Key\n(in certificate)" as PubKey #fef3c7
  rectangle "Encrypt\nPre-master Secret" as Encrypt #fef3c7
  rectangle "Server Private Key\n(secret, on server)" as PrivKey #fee2e2

  PubKey -down-> Encrypt
  Encrypt -down-> PrivKey : Only server\ncan decrypt

  note bottom of PubKey
    ✅ Secure key exchange
    ❌ Slow (RSA/ECC)
    Used once per session
  end note
}

package "Symmetric (Session Key)\nUsed For Data Transfer" {
  rectangle "Session Key\n(AES-256)" as SessionKey #dcfce7
  rectangle "Encrypt HTTP Data" as EncryptData #dcfce7
  rectangle "Decrypt HTTP Data" as DecryptData #dcfce7

  SessionKey -down-> EncryptData
  SessionKey -down-> DecryptData

  note bottom of SessionKey
    ✅ Very fast (AES)
    ✅ Both sides have key
    Used for all HTTP traffic
  end note
}

Encrypt -right-> SessionKey : Derives
@enduml
```

Common TLS Cipher Suites
```text
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
│   │     │        │       │   │
│   │     │        │       │   └─ HMAC algorithm (integrity check)
│   │     │        │       └───── Mode of operation (Galois/Counter)
│   │     │        └───────────── Symmetric cipher: encryption algorithm + key size
│   │     └────────────────────── Certificate signature algorithm
│   └──────────────────────────── Key exchange algorithm
└──────────────────────────────── Protocol (TLS)
```

Modern Recommended Cipher Suites (TLS 1.3):

```text
TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_AES_128_GCM_SHA256
```
Deprecated/Weak (Avoid):
- Anything with `RC4`, `MD5`, `DES`, `3DES`
- `TLS_RSA_*` (no forward secrecy)
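You can check which suites your own OpenSSL build will actually offer; a sketch with the stdlib `ssl` module (exact suite names vary by OpenSSL version):

```python
# Sketch: list the cipher suites a default (secure) context enables and
# confirm none of the deprecated ones survive.
import ssl

ctx = ssl.create_default_context()
names = [c["name"] for c in ctx.get_ciphers()]
weak = [n for n in names if any(bad in n for bad in ("RC4", "MD5"))]
print(len(names) > 0)   # → True: at least one modern suite is offered
print(weak)             # deprecated suites should not appear here
```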
HTTPS + HTTP/2 = Modern Web
```plantuml
@startuml
title Modern HTTPS Connection Stack

rectangle "Application Layer" as App {
  rectangle "HTTP/2\n(Binary, Multiplexing)" as HTTP2 #dcfce7
}

rectangle "Security Layer" as Security {
  rectangle "TLS 1.3\n(Encryption)" as TLS #fef3c7
}

rectangle "Transport Layer" as Transport {
  rectangle "TCP\n(Reliable Delivery)" as TCP #dbeafe
}

rectangle "Network Layer" as Network {
  rectangle "IP\n(Routing)" as IP #e0e7ff
}

App -down-> Security : Encrypted HTTP/2 frames
Security -down-> Transport : Encrypted packets
Transport -down-> Network : TCP segments

note right of HTTP2
  - Multiplexing
  - Header compression
  - Server push
end note

note right of TLS
  - Encrypts all HTTP/2 data
  - Authenticates server
  - Protects integrity
end note

note right of TCP
  - Ensures packet delivery
  - Handles retransmissions
  - In-order delivery
end note

legend bottom
  When you visit https://example.com:
  1. TCP connection established
  2. TLS handshake (verify certificate)
  3. HTTP/2 negotiation (ALPN)
  4. Encrypted HTTP/2 communication
endlegend
@enduml
```

TLS Termination
TLS Termination is the process of decrypting HTTPS traffic at a proxy/load balancer instead of at the backend application server.
Why TLS Termination?
In large-scale applications, handling TLS encryption/decryption directly on application servers can be:
- CPU-intensive: encryption/decryption consumes significant CPU resources
- Complex to manage: certificates must be distributed and renewed across many servers
- Hard to inspect: encrypted traffic can't be logged, monitored, or filtered
Solution: Offload TLS to a dedicated layer (load balancer, reverse proxy, CDN).
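A common way to implement this is nginx as the termination point; a hedged sketch (certificate paths, domain, and backend addresses are placeholders, not a production config):

```nginx
# Sketch: nginx terminates TLS, backends receive plain HTTP.
upstream app_servers {
    server 10.0.0.11:8080;   # backends speak plain HTTP
    server 10.0.0.12:8080;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    ssl_protocols       TLSv1.2 TLSv1.3;       # reject legacy protocols

    location / {
        proxy_pass http://app_servers;         # decrypted traffic goes on as HTTP
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The `X-Forwarded-*` headers let the backends know the original request was HTTPS even though they receive plain HTTP.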
```plantuml
@startuml
title Without TLS Termination (End-to-End Encryption)
hide footbox
participant Client
participant "Load\nBalancer" as LB
participant "App\nServer 1" as App1
participant "App\nServer 2" as App2

Client -> LB: HTTPS Request\n[Encrypted]
note right
  TLS connection
end note

LB -> App1: HTTPS Request\n[Still Encrypted]
note right
  Load balancer cannot inspect payload
end note

App1 -> App1: TLS Handshake\nDecrypt\nProcess Request
note right
  Each server handles TLS individually
  CPU overhead
end note

App1 -> LB: HTTPS Response\n[Encrypted]
LB -> Client: HTTPS Response\n[Encrypted]

legend bottom
  ❌ Every server does TLS (CPU load)
  ❌ Certificate on every server
  ❌ Cannot inspect/log traffic at LB
endlegend
@enduml
```
```plantuml
@startuml
title With TLS Termination (Decrypt at Load Balancer)
hide footbox
participant Client
participant "Load Balancer\n(TLS Termination)" as LB
participant "App\nServer 1" as App1
participant "App\nServer 2" as App2

Client -> LB: HTTPS Request\n[Encrypted]
note right
  TLS connection to load balancer
end note

LB -> LB: TLS Handshake\nDecrypt\nInspect Headers\nLog Request
note right
  ✅ LB handles TLS
  ✅ Can inspect traffic
  ✅ Can add headers
  ✅ Can rate limit
end note

LB -> App1: HTTP Request\n[Plain HTTP]
note right
  Internal network
  No encryption needed
  (trusted environment)
end note

App1 -> App1: Process Request\n(No TLS overhead)
note right
  ✅ Faster processing
  ✅ No certificate needed
  ✅ Less CPU usage
end note

App1 -> LB: HTTP Response\n[Plain HTTP]
LB -> LB: Encrypt\nAdd Headers
LB -> Client: HTTPS Response\n[Encrypted]

legend bottom
  ✅ Centralized certificate management
  ✅ Traffic inspection/logging
  ✅ Reduced CPU load on app servers
  ✅ Single TLS configuration
endlegend
@enduml
```

TLS Termination Patterns
- TLS Termination - the load balancer terminates TLS and forwards plain HTTP (not HTTPS) to the backend servers
- TLS Pass-Through - the load balancer simply proxies the HTTPS request to the backend servers, which handle TLS themselves
- TLS Re-Encryption - the load balancer decrypts the HTTPS request for logging or filtering purposes, then re-encrypts it and sends HTTPS to the backend servers
```plantuml
@startuml
title TLS Termination Patterns
left to right direction

package "Pattern 1: TLS Termination" {
  rectangle "Client" as C1 #e0e7ff
  rectangle "Load Balancer\n(Terminates TLS)" as LB1 #fef3c7
  rectangle "Backend\n(HTTP)" as B1 #dcfce7

  C1 -right-> LB1 : HTTPS
  LB1 -right-> B1 : HTTP

  note bottom of LB1
    ✅ Most common
    ✅ Simple
    ⚠️ Backend unencrypted
  end note
}

package "Pattern 2: TLS Pass-Through" {
  rectangle "Client" as C2 #e0e7ff
  rectangle "Load Balancer\n(TCP Proxy)" as LB2 #dbeafe
  rectangle "Backend\n(Handles TLS)" as B2 #dcfce7

  C2 -right-> LB2 : HTTPS
  LB2 -right-> B2 : HTTPS

  note bottom of LB2
    ✅ End-to-end encryption
    ❌ No traffic inspection
    ❌ More CPU on backend
  end note
}

package "Pattern 3: TLS Re-Encryption" {
  rectangle "Client" as C3 #e0e7ff
  rectangle "Load Balancer\n(Terminates + Re-encrypts)" as LB3 #fef3c7
  rectangle "Backend\n(HTTPS)" as B3 #dcfce7

  C3 -right-> LB3 : HTTPS
  LB3 -right-> B3 : HTTPS (new)

  note bottom of LB3
    ✅ Traffic inspection
    ✅ Secure backend
    ⚠️ More complex
    ⚠️ Higher latency
  end note
}
@enduml
```

Real-World Use Cases
1. Cloud Load Balancers (AWS, Azure, GCP)

```text
Internet → AWS ALB (TLS Termination) → EC2 Instances (HTTP)
                   ↑
    Certificate managed by AWS Certificate Manager
```

2. Kubernetes Ingress

```text
Internet → Nginx Ingress (TLS Termination) → Kubernetes Pods (HTTP)
                   ↑
    TLS secret stored in Kubernetes
```

3. CDN (Cloudflare, Akamai)

```text
Client → CDN Edge (TLS Termination) → Origin Server (HTTP/HTTPS)
                 ↑
    CDN handles TLS
    Caches static content
```

Security Considerations
When to Use Each Pattern
| Pattern | Use Case | Pros | Cons |
|---|---|---|---|
| TLS Termination | Most web applications, APIs | Simple, fast, traffic inspection | Backend unencrypted |
| TLS Pass-Through | Zero-trust networks, compliance | End-to-end encryption | No inspection, higher CPU |
| TLS Re-Encryption | Financial services, healthcare | Best security + inspection | Complex, higher latency |
gRPC vs REST
If you simply configured a standard REST API to accept application/x-protobuf instead of application/json, you would only gain the serialization benefits (smaller payload size). However, you would miss out on the architectural and transport advantages that make gRPC a standard for microservices.
Here is why gRPC is more than just "REST with Protobuf."
HTTP/2 Native (The “Hidden” Performance Booster)

Most REST APIs still run on HTTP/1.1 (HTTP/2 is possible, but not enforced). gRPC is designed strictly for HTTP/2. This difference fundamentally changes how data moves.

- Multiplexing: In a standard REST (HTTP/1.1) call, if you need to fetch 5 resources, browsers or clients often open 5 separate TCP connections. In gRPC, a single TCP connection is established, and multiple requests/responses are multiplexed over that one channel without blocking each other, avoiding HTTP/1.1-style head-of-line blocking.
- Header Compression (HPACK): REST APIs send heavy textual headers (User-Agent, Authorization, etc.) with every single request. gRPC compresses these headers efficiently, which significantly reduces overhead for high-frequency internal calls.
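The multiplexing idea can be illustrated with a toy simulation (plain Python, not a real HTTP/2 implementation): frames from several logical streams share one connection, each tagged with a stream ID, and get reassembled on arrival.

```python
# Toy illustration of HTTP/2-style multiplexing (NOT real HTTP/2):
# frames from several logical streams are interleaved on one
# connection, tagged with a stream ID, and reassembled on arrival.

def interleave(streams: dict[int, bytes], chunk_size: int = 4) -> list[tuple[int, bytes]]:
    """Split each stream's payload into frames and interleave them round-robin."""
    frames = []
    offsets = {sid: 0 for sid in streams}
    while any(offsets[sid] < len(data) for sid, data in streams.items()):
        for sid, data in streams.items():
            off = offsets[sid]
            if off < len(data):
                frames.append((sid, data[off:off + chunk_size]))
                offsets[sid] = off + chunk_size
    return frames

def reassemble(frames: list[tuple[int, bytes]]) -> dict[int, bytes]:
    """Rebuild each stream by concatenating its frames in arrival order."""
    out: dict[int, bytes] = {}
    for sid, chunk in frames:
        out[sid] = out.get(sid, b"") + chunk
    return out

# Three "responses" share one "connection"; none blocks the others.
responses = {1: b"<html>...</html>", 3: b"body { ... }", 5: b"\x89PNG..."}
wire = interleave(responses)
assert reassemble(wire) == responses  # each stream arrives intact
```

The point of the sketch: a slow stream only delays its own remaining frames, not the other streams on the connection.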
Streaming (Beyond Request/Response)

The premise of “just making HTTP POST calls” assumes a strict request-response model (client sends one thing, server sends one thing back). gRPC breaks this paradigm. Because of HTTP/2 framing, gRPC supports:

- Server-side streaming: Client sends one request, server sends back a stream of 100 updates.
- Client-side streaming: Client uploads a massive file chunk-by-chunk, server replies once when done.
- Bidirectional streaming: Both sides send data independently in real time (like a chat app or stock ticker).

Implementing bidirectional streaming over standard REST usually requires messy workarounds (long polling, WebSockets, or Server-Sent Events), whereas in gRPC it is a first-class citizen.
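The four call shapes can be sketched with plain Python generators, which is roughly how the grpcio server API exposes them (the function names here are hypothetical, and real servicer methods also receive a context argument):

```python
# Sketch of gRPC's four call shapes using plain Python generators
# (no gRPC runtime; hypothetical method names).

def get_user(request):                 # unary: one in, one out
    return {"id": request["id"], "name": "John"}

def list_users(_request):              # server streaming: one in, many out
    for i in range(3):
        yield {"id": i, "name": f"user-{i}"}

def upload_chunks(chunk_iter):         # client streaming: many in, one out
    total = sum(len(c) for c in chunk_iter)
    return {"received_bytes": total}

def chat(message_iter):                # bidirectional: many in, many out
    for msg in message_iter:
        yield f"echo: {msg}"

print(get_user({"id": 123}))
print([u["name"] for u in list_users(None)])
print(upload_chunks(iter([b"abc", b"defg"])))
print(list(chat(iter(["hi", "bye"]))))
```

In REST, only the first shape is natural; the other three are exactly the cases that force long polling, WebSockets, or SSE.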
The “Contract First” Workflow (IDL)

If you build a REST API with Protobuf manually, you still have to maintain the “translation layer” yourself.

- REST approach: You write the backend code. Then you write an OpenAPI (Swagger) spec (or vice versa). Then you hope the frontend developer reads the documentation correctly. If you change a field name, the client breaks at runtime.
- gRPC approach: You define a .proto file first. The gRPC tooling generates the code for both the client and the server.
  - The client function GetUser(id) is generated for you.
  - The serialization/deserialization logic is generated for you.
  - You physically cannot call the API with the wrong parameters because the code won’t compile.
Semantic Differences (Action vs. Resource)

- REST is Resource-Oriented: It focuses on nouns: POST /users, GET /users/123. You are constrained by HTTP verbs (GET, POST, PUT, DELETE).
- gRPC is Action-Oriented (RPC): It focuses on verbs. It looks like a function call: service.CreateUser(), service.CalculateRoute(). You aren’t forcing your logic to fit into HTTP verbs; you are just calling functions across the network.
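The contrast can be made concrete with a toy sketch (all names hypothetical): a REST-style dispatcher must squeeze every operation into a (verb, path) pair, while an RPC-style service simply exposes functions.

```python
# Toy contrast (hypothetical names): REST-style dispatch constrained
# to (HTTP verb, noun path) vs. an RPC-style direct method call.

users = {}

# Resource-oriented: every operation must map onto a verb + path.
def rest_dispatch(method, path, body=None):
    if method == "POST" and path == "/users":
        uid = len(users) + 1
        users[uid] = {"id": uid, **body}
        return users[uid]
    if method == "GET" and path.startswith("/users/"):
        return users[int(path.rsplit("/", 1)[1])]
    raise ValueError("no route")

# Action-oriented: just call a function; no verb/path mapping needed.
class UserService:
    def create_user(self, name):
        uid = len(users) + 1
        users[uid] = {"id": uid, "name": name}
        return users[uid]

created = rest_dispatch("POST", "/users", {"name": "Ada"})
assert rest_dispatch("GET", f"/users/{created['id']}") == created
assert UserService().create_user("Grace")["name"] == "Grace"
```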
gRPC Architecture and Data Flow
Section titled “gRPC Architecture and Data Flow”
@startuml
title gRPC Complete Architecture
skinparam backgroundColor #ffffff
skinparam Shadowing false
skinparam DefaultFontName Arial
skinparam DefaultFontSize 13
skinparam rectangleBorderColor #94a3b8
skinparam rectangleBackgroundColor #f8fafc
skinparam noteBorderColor #94a3b8
skinparam noteBackgroundColor #fef3c7

package "Client Application" {
  rectangle "App Code\n(Python/Java/Go)" as ClientCode #e0f2fe
  rectangle "Generated\nClient Stub" as ClientStub #dbeafe
  rectangle "gRPC Core\n(HTTP/2 + Protobuf)" as ClientCore #bfdbfe
}

package "Network Layer" {
  cloud "HTTP/2\nTCP Connection" as NetworkConn #f8fafc
}

package "Server Application" {
  rectangle "gRPC Core\n(HTTP/2 + Protobuf)" as ServerCore #fce7f3
  rectangle "Generated\nServer Stub" as ServerStub #fbcfe8
  rectangle "Service\nImplementation" as ServerCode #f9a8d4
}

ClientCode -down-> ClientStub : 1. Call method\ngetUser(id=123)
note right of ClientCode
  Developer writes:
  user = stub.GetUser(
    UserRequest(id=123)
  )
end note

ClientStub -down-> ClientCore : 2. Serialize to\nProtobuf binary
note right of ClientStub
  Auto-generated from .proto
  Handles type conversion
end note

ClientCore -down-> NetworkConn : 3. HTTP/2 POST\n:method POST\n:path /UserService/GetUser\ncontent-type: application/grpc+proto
note bottom of ClientCore
  HTTP/2 features:
  - Single TCP connection
  - Multiplexing
  - Header compression
end note

NetworkConn -down-> ServerCore : 4. HTTP/2 stream\n[protobuf binary]

ServerCore -down-> ServerStub : 5. Deserialize\nProtobuf to object
note left of ServerCore
  Validates message format
  Decompresses headers
end note

ServerStub -down-> ServerCode : 6. Call service method\nGetUser(request)
note left of ServerStub
  Type-safe method call
  Generated from .proto
end note

ServerCode -up-> ServerStub : 7. Return response\nUser{id:123, name:"John"}
note right of ServerCode
  Business logic
  Database queries
  Processing
end note

ServerStub -up-> ServerCore : 8. Serialize response\nto Protobuf
ServerCore -up-> NetworkConn : 9. HTTP/2 response\n[protobuf binary]
NetworkConn -up-> ClientCore : 10. HTTP/2 stream
ClientCore -up-> ClientStub : 11. Deserialize\nProtobuf to object
ClientStub -up-> ClientCode : 12. Return User object

legend right
  gRPC eliminates manual HTTP handling:
  - No URL construction
  - No JSON parsing
  - No manual serialization
  - Type-safe at compile time
endlegend
@enduml

Practical Example: Code vs Network
Section titled “Practical Example: Code vs Network”Step 1: Define the Contract (.proto file)
Section titled “Step 1: Define the Contract (.proto file)”
syntax = "proto3";

service UserService {
  rpc GetUser (UserRequest) returns (UserResponse);
  rpc ListUsers (Empty) returns (stream UserResponse);  // Server streaming
}

message UserRequest {
  int32 id = 1;
}

message UserResponse {
  int32 id = 1;
  string name = 2;
  string email = 3;
}

message Empty {}

Step 2: Client Code (Python example)
Section titled “Step 2: Client Code (Python example)”
import grpc
import user_pb2
import user_pb2_grpc

# Client sits in your application (could be another microservice)
channel = grpc.insecure_channel('localhost:50051')
stub = user_pb2_grpc.UserServiceStub(channel)

# This looks like a local function call!
request = user_pb2.UserRequest(id=123)
response = stub.GetUser(request)
print(f"User: {response.name}, Email: {response.email}")

# Server streaming example
for user in stub.ListUsers(user_pb2.Empty()):
    print(f"Streamed user: {user.name}")

Step 3: What Actually Happens on the Wire
Section titled “Step 3: What Actually Happens on the Wire”
@startuml
title What Happens Under the Hood
skinparam backgroundColor #ffffff
skinparam Shadowing false
skinparam DefaultFontName Arial
skinparam DefaultFontSize 13
skinparam sequenceArrowColor #334155
skinparam sequenceParticipantBorderColor #94a3b8
skinparam sequenceParticipantBackgroundColor #f8fafc
skinparam noteBackgroundColor #f8fafc
skinparam noteBorderColor #94a3b8
hide footbox

participant "Client App\n(Python)" as Client
participant "gRPC Client\nStub" as ClientStub
participant "HTTP/2\nConnection" as HTTP2
participant "gRPC Server" as Server
participant "Service\nImplementation" as Service

== Connection Establishment ==
Client -> ClientStub: channel = grpc.insecure_channel()
ClientStub -> HTTP2: Establish TCP connection
HTTP2 -> Server: TCP handshake + HTTP/2 preface
note right
  Single connection reused
  for all RPC calls
end note

== RPC Call: GetUser(id=123) ==
Client -> ClientStub: stub.GetUser(UserRequest(id=123))
note right of Client
  Looks like local function call
  Type-safe, no URL construction
end note

ClientStub -> ClientStub: Serialize to Protobuf\n[0x08 0x7B] (id=123)
note right
  Binary encoding:
  Field 1 (id), varint, value 123
  Much smaller than JSON
end note

ClientStub -> HTTP2: HTTP/2 POST\nHeaders:\n :method: POST\n :path: /UserService/GetUser\n content-type: application/grpc+proto\n te: trailers\nBody: [0x08 0x7B]
note right
  HTTP/2 frame format
  Headers compressed (HPACK)
  Multiple calls multiplexed
end note

HTTP2 -> Server: Binary protobuf payload
Server -> Service: GetUser(UserRequest{id: 123})
note left
  Deserialized automatically
  Type-safe in server code
end note

Service -> Service: Query database
Service -> Server: User{id:123, name:"John", email:"j@ex.com"}

Server -> HTTP2: HTTP/2 Response\nHeaders:\n :status: 200\n content-type: application/grpc+proto\n grpc-status: 0\nBody: [protobuf binary]
note left
  Serialized User object
  Compressed response
end note

HTTP2 -> ClientStub: Binary response
ClientStub -> ClientStub: Deserialize protobuf
ClientStub -> Client: UserResponse{id:123, name:"John", ...}
note right
  Type-safe object returned
  No JSON parsing needed
end note

== Server Streaming: ListUsers() ==
Client -> ClientStub: for user in stub.ListUsers()
ClientStub -> HTTP2: POST /UserService/ListUsers
HTTP2 -> Server: Request (empty)
Server -> Service: ListUsers(Empty{})

loop For each user in database
  Service -> Server: yield User{...}
  Server -> HTTP2: HTTP/2 DATA frame
  note left
    Multiple responses
    Same HTTP/2 stream
  end note
  HTTP2 -> ClientStub: Protobuf user object
  ClientStub -> Client: User object
  Client -> Client: print(user)
end

Server -> HTTP2: HTTP/2 trailers (grpc-status: 0)
note left
  End of stream signal
end note

legend right
  Key Differences from REST:
  1. Single TCP connection, multiplexed
  2. Binary protobuf (not JSON)
  3. Streaming built-in (not bolt-on)
  4. Type-safe generated code
  5. No manual HTTP handling
endlegend
@enduml

Where Does the Client Sit?
Section titled “Where Does the Client Sit?”
@startuml
title gRPC Client Deployment Scenarios
skinparam backgroundColor #ffffff
skinparam Shadowing false
skinparam DefaultFontName Arial
skinparam DefaultFontSize 13
skinparam rectangleBorderColor #94a3b8
skinparam rectangleBackgroundColor #f8fafc
skinparam cloudBorderColor #10b981
skinparam cloudBackgroundColor #ecfdf5

package "Scenario 1: Microservices (Internal)" {
  rectangle "Order Service\n(gRPC Client)" as OrderClient #e0f2fe
  rectangle "User Service\n(gRPC Server)" as UserServer #fce7f3
  rectangle "Payment Service\n(gRPC Server)" as PaymentServer #fef3c7

  OrderClient -right-> UserServer : gRPC call\ngetUser(id)
  OrderClient -down-> PaymentServer : gRPC call\nprocessPayment()

  note bottom of OrderClient
    Order service acts as CLIENT
    when calling other services
  end note
}

package "Scenario 2: Mobile/Web App" {
  rectangle "Browser\n(gRPC-Web Client)" as Browser #dbeafe
  cloud "gRPC-Web\nProxy (Envoy)" as Proxy #d1fae5
  rectangle "Backend API\n(gRPC Server)" as Backend #fce7f3

  Browser -right-> Proxy : HTTP/1.1 + Base64\n(browser-compatible)
  Proxy -right-> Backend : HTTP/2 + Binary\n(native gRPC)

  note bottom of Browser
    Browsers can't:
    - Access raw HTTP/2 frames
    - Send trailers (in some cases)
    - Use bidirectional streaming
  end note

  note bottom of Proxy
    Proxy translates:
    gRPC-Web ↔ gRPC
  end note
}

package "Scenario 3: API Gateway Pattern" {
  rectangle "API Gateway\n(gRPC Client + HTTP Server)" as Gateway #f3e8ff
  rectangle "Service A\n(gRPC Server)" as ServiceA #fce7f3
  rectangle "Service B\n(gRPC Server)" as ServiceB #fef3c7
  rectangle "External Client\n(REST)" as External #e0e7ff

  External -down-> Gateway : REST/HTTP
  Gateway -down-> ServiceA : gRPC
  Gateway -down-> ServiceB : gRPC

  note bottom of Gateway
    Gateway translates REST to gRPC
    Internal services use gRPC
  end note
}

package "Scenario 4: CLI Tool / Batch Job" {
  rectangle "Admin CLI\n(gRPC Client)" as CLI #dbeafe
  rectangle "Backend Service\n(gRPC Server)" as BackendService #fce7f3

  CLI -right-> BackendService : gRPC call\nadminOperation()

  note bottom of CLI
    Command-line tools can be
    gRPC clients for automation
  end note
}

legend right
  gRPC Client can be:
  - Another microservice (most common)
  - Mobile app (via gRPC-Web)
  - Web app (via gRPC-Web)
  - CLI tool / script
  - API Gateway
  - Batch job / worker
endlegend
@enduml

Binary Payload Comparison
Section titled “Binary Payload Comparison”REST/JSON Request:
POST /api/users/123 HTTP/1.1
Host: example.com
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
User-Agent: Mozilla/5.0...

{"id": 123, "name": "John Doe", "email": "john@example.com"}

Size: ~350 bytes (headers + JSON)

gRPC/Protobuf Request:

:method: POST
:path: /UserService/GetUser
content-type: application/grpc+proto

[Binary: 0x08 0x7B]  // Just 2 bytes for id=123!

Size: ~80 bytes (compressed headers + protobuf)
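Where do bytes like 0x08 0x7B come from? A minimal hand-rolled encoder for the two protobuf wire types used by UserResponse (varint and length-delimited) makes the size difference concrete. In real code the generated user_pb2 classes do all of this for you; this sketch is only to show the mechanics.

```python
# Hand-rolled protobuf encoding for UserResponse, to show where
# bytes like 0x08 0x7B come from (normally done by generated code).
import json

def varint(n: int) -> bytes:
    """Protobuf varint: 7 data bits per byte, high bit = 'more follows'."""
    out = bytearray()
    while True:
        n, low = n >> 7, n & 0x7F
        out.append(low | (0x80 if n else 0))
        if not n:
            return bytes(out)

def tag(field_number: int, wire_type: int) -> bytes:
    """Field tag: (field_number << 3) | wire_type, varint-encoded."""
    return varint((field_number << 3) | wire_type)

def encode_user(uid: int, name: str, email: str) -> bytes:
    msg = tag(1, 0) + varint(uid)                              # int32 id = 1
    msg += tag(2, 2) + varint(len(name)) + name.encode()       # string name = 2
    msg += tag(3, 2) + varint(len(email)) + email.encode()     # string email = 3
    return msg

pb = encode_user(123, "John Doe", "john@example.com")
js = json.dumps({"id": 123, "name": "John Doe", "email": "john@example.com"})
assert pb[:2] == b"\x08\x7b"   # field 1, wire type 0 (varint), value 123
print(len(pb), "bytes protobuf vs", len(js), "bytes JSON")
```

The id alone is 2 bytes (tag 0x08, value 0x7B = 123), and the whole message is 30 bytes, roughly half the equivalent JSON before headers are even counted.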
Why Browsers Don’t Fully Support gRPC
Section titled “Why Browsers Don’t Fully Support gRPC”Browsers have fundamental limitations that prevent native gRPC support:
1. No Access to Raw HTTP/2 Frames
Section titled “1. No Access to Raw HTTP/2 Frames”
@startuml
title Browser Limitations with HTTP/2
skinparam backgroundColor #ffffff
skinparam Shadowing false
skinparam DefaultFontName Arial
skinparam DefaultFontSize 13
skinparam rectangleBorderColor #94a3b8
skinparam rectangleBackgroundColor #f8fafc
skinparam noteBorderColor #ef4444
skinparam noteBackgroundColor #fee2e2

package "What gRPC Needs" {
  rectangle "Direct HTTP/2\nFrame Control" as Need1 #fecaca
  rectangle "Custom Headers\n(trailers)" as Need2 #fecaca
  rectangle "Bidirectional\nStreaming" as Need3 #fecaca
}

package "What Browser Provides" {
  rectangle "Fetch API" as Fetch #dbeafe
  rectangle "XMLHttpRequest" as XHR #dbeafe
  rectangle "WebSocket" as WS #dbeafe
}

note right of Need1
  gRPC needs to control
  HTTP/2 frame types directly
end note

note left of Fetch
  Fetch API abstracts away
  low-level HTTP/2 details
end note

Fetch -[hidden]right-> Need1
XHR -[hidden]right-> Need2
WS -[hidden]right-> Need3

note bottom
  Browser security model prevents
  direct HTTP/2 frame manipulation
end note
@enduml

Problem: Browsers provide high-level APIs (fetch, XMLHttpRequest) that abstract HTTP/2. gRPC needs direct control over HTTP/2 frames to:
- Send custom frame types
- Control flow control windows
- Manage stream priorities
2. HTTP Trailers Limitation
Section titled “2. HTTP Trailers Limitation”gRPC relies heavily on HTTP trailers to send metadata after the response body (like error codes, status).
// Normal gRPC response with trailers
HTTP/2 200 OK
content-type: application/grpc+proto

[response data - streaming]

grpc-status: 0        ← Trailer (sent AFTER body)
grpc-message: Success ← Trailer

Browser Issue:
- The fetch() API doesn’t expose trailers in most browsers
- Even with HTTP/2, trailers are often ignored or inaccessible in JavaScript
3. Bidirectional Streaming
Section titled “3. Bidirectional Streaming”
@startuml
title gRPC Streaming Types
skinparam backgroundColor #ffffff
skinparam Shadowing false
skinparam DefaultFontName Arial
skinparam DefaultFontSize 13
skinparam sequenceArrowColor #334155
skinparam sequenceParticipantBorderColor #94a3b8
skinparam sequenceParticipantBackgroundColor #f8fafc
skinparam noteBackgroundColor #f8fafc
skinparam noteBorderColor #94a3b8
hide footbox

participant "Browser" as Browser
participant "Server" as Server

== Server Streaming (Supported in gRPC-Web) ==
Browser -> Server: Request
note right: Single request
Server --> Browser: Response 1
Server --> Browser: Response 2
Server --> Browser: Response 3
note left: Multiple responses
note right of Browser
  Fetch API supports this
  (Server-Sent Events style)
end note

== Client Streaming (NOT in gRPC-Web) ==
Browser -> Server: Request 1
Browser -> Server: Request 2
Browser -> Server: Request 3
note right: Multiple requests
Server --> Browser: Single response
note right of Browser
  ❌ Fetch API limitation
  Can't send multiple requests
  on same stream
end note

== Bidirectional Streaming (NOT in gRPC-Web) ==
Browser -> Server: Request 1
Server --> Browser: Response 1
Browser -> Server: Request 2
Server --> Browser: Response 2
note right of Browser
  ❌ Would need WebSocket
  Not true HTTP/2 gRPC
end note

legend right
  Browsers can only do:
  - Unary (request/response)
  - Server streaming

  Cannot do:
  - Client streaming
  - Bidirectional streaming
endlegend
@enduml

gRPC-Web: The Browser Solution
Section titled “gRPC-Web: The Browser Solution”gRPC-Web is a modified protocol that works within browser constraints.
@startuml
title gRPC-Web Proxy Architecture
skinparam backgroundColor #ffffff
skinparam Shadowing false
skinparam DefaultFontName Arial
skinparam DefaultFontSize 13
skinparam sequenceArrowColor #334155
skinparam sequenceParticipantBorderColor #94a3b8
skinparam sequenceParticipantBackgroundColor #f8fafc
skinparam noteBackgroundColor #f8fafc
skinparam noteBorderColor #94a3b8
hide footbox

participant "Browser\n(JavaScript)" as Browser
participant "gRPC-Web\nProxy (Envoy)" as Proxy
participant "Backend\n(gRPC Server)" as Backend

== Request Flow ==
Browser -> Browser: Generate JS stub\nfrom .proto
note right of Browser
  Code generation still works
  But different wire format
end note

Browser -> Proxy: HTTP/1.1 POST\nContent-Type: application/grpc-web+proto\nBody: [Base64 encoded protobuf]
note right of Browser
  Uses regular HTTP/1.1
  Base64 encoding (not binary)
  Trailers in body (not headers)
end note

Proxy -> Proxy: Decode Base64\nExtract trailers from body
note right of Proxy
  Proxy does the translation
  Envoy, Nginx, or custom proxy
end note

Proxy -> Backend: HTTP/2 gRPC\nBinary protobuf\nReal trailers
note right of Proxy
  Converts to native gRPC
  Full HTTP/2 features
end note

Backend -> Backend: Process request
Backend -> Proxy: gRPC response\n+ trailers

Proxy -> Proxy: Encode to Base64\nEmbed trailers in body
note left of Proxy
  Converts back to
  gRPC-Web format
end note

Proxy -> Browser: HTTP/1.1 Response\nBase64 encoded\nTrailers in body
note left of Browser
  Browser-compatible format
  Can use fetch() API
end note

legend right
  gRPC-Web Compromises:
  ✅ Unary calls work
  ✅ Server streaming works
  ❌ Client streaming doesn't work
  ❌ Bidirectional streaming doesn't work
  ⚠️ Larger payload (Base64 overhead)
endlegend
@enduml

What gRPC-Web Proxy Does
Section titled “What gRPC-Web Proxy Does”Envoy Proxy Configuration Example:
http_filters:
  - name: envoy.filters.http.grpc_web
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb

Translation:
- Request: Base64 protobuf → Binary protobuf
- Headers: Browser-safe headers → gRPC headers
- Trailers: Extract from body → Put in HTTP/2 trailers
- Response: Binary → Base64, Trailers → Body
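The framing the proxy translates can be sketched in a few lines of Python. This is a simplified illustration of the gRPC-Web text-mode wire format (each frame is a 1-byte flag plus a 4-byte big-endian length and the payload, with 0x80 flagging the trailers frame, and the whole body Base64-encoded), not a complete implementation.

```python
# Simplified sketch of gRPC-Web framing: a 1-byte flag (0x00 = message,
# 0x80 = trailers), a 4-byte big-endian length, then the payload.
# In text mode the whole body is Base64-encoded for the browser.
import base64

def frame(flag: int, payload: bytes) -> bytes:
    return bytes([flag]) + len(payload).to_bytes(4, "big") + payload

message  = b"\x08\x7b"                # protobuf: id=123
trailers = b"grpc-status: 0\r\n"      # trailers travel inside the body
body     = frame(0x00, message) + frame(0x80, trailers)
wire     = base64.b64encode(body)     # what actually crosses HTTP/1.1

# Proxy side: decode and split the frames back apart.
raw, frames = base64.b64decode(wire), []
while raw:
    flag, length = raw[0], int.from_bytes(raw[1:5], "big")
    frames.append((flag, raw[5:5 + length]))
    raw = raw[5 + length:]

assert frames == [(0x00, message), (0x80, trailers)]
print(f"Base64 adds ~{len(wire) / len(body):.2f}x size overhead")
```

The Base64 step is also where the roughly 33% payload inflation mentioned below comes from.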
Why gRPC is Not Popular with Web Apps (Client-Side)
Section titled “Why gRPC is Not Popular with Web Apps (Client-Side)”
@startuml
title gRPC vs REST for Web Clients
left to right direction
skinparam backgroundColor #ffffff
skinparam Shadowing false
skinparam DefaultFontName Arial
skinparam DefaultFontSize 13
skinparam rectangleBorderColor #94a3b8
skinparam rectangleBackgroundColor #f8fafc

package "REST Advantages for Web" {
  rectangle "Native Browser\nSupport" as REST1 #dcfce7
  rectangle "Easy Debugging\n(DevTools)" as REST2 #dcfce7
  rectangle "CDN Cacheable" as REST3 #dcfce7
  rectangle "No Proxy Needed" as REST4 #dcfce7
  rectangle "Human Readable" as REST5 #dcfce7
}

package "gRPC-Web Disadvantages" {
  rectangle "Needs Proxy\n(Envoy/Nginx)" as GRPC1 #fee2e2
  rectangle "Larger Payloads\n(Base64)" as GRPC2 #fee2e2
  rectangle "Limited Streaming" as GRPC3 #fee2e2
  rectangle "Binary Debugging\nHarder" as GRPC4 #fee2e2
  rectangle "No Browser Cache" as GRPC5 #fee2e2
}

note right of REST1
  fetch() works out of the box
  No translation needed
end note

note left of GRPC1
  Must deploy and maintain
  additional proxy layer
end note

note right of REST3
  GET requests cacheable
  by CDNs (Cloudflare, etc.)
end note

note left of GRPC5
  POST requests only
  Can't leverage HTTP cache
end note
@enduml

Real-World Comparison
Section titled “Real-World Comparison”REST API (Direct from Browser):
// Works everywhere, no setup
const response = await fetch('https://api.example.com/users/123');
const user = await response.json();
console.log(user); // Easy to debug in DevTools

gRPC-Web (Requires Proxy + Code Gen):

// 1. Need to deploy Envoy proxy
// 2. Generate JS stubs from .proto
// 3. Import generated code
import {UserServiceClient} from './generated/user_grpc_web_pb';
import {UserRequest} from './generated/user_pb';

const client = new UserServiceClient('https://api.example.com');
const request = new UserRequest();
request.setId(123);

client.getUser(request, {}, (err, response) => {
  console.log(response.toObject()); // Binary, harder to debug
});

When to Use gRPC-Web vs REST
Section titled “When to Use gRPC-Web vs REST”| Use Case | Recommended |
|---|---|
| Public API for web apps | REST/JSON |
| Internal microservices | gRPC (native) |
| Mobile apps (native) | gRPC (native) |
| Real-time dashboards (server streaming) | gRPC-Web ⚠️ |
| Simple CRUD operations | REST/JSON |
| Backend-to-backend (Node.js, Go server) | gRPC (native) |
Why REST Dominates Web Clients
Section titled “Why REST Dominates Web Clients”Web App Priorities gRPC-Web REST/JSON─────────────────────────────────────────────────Simple setup ❌ ✅Works everywhere ⚠️ ✅Easy debugging (DevTools) ❌ ✅CDN caching ❌ ✅No extra infrastructure ❌ ✅Human-readable payloads ❌ ✅Bidirectional streaming ❌ ❌*Type safety ✅ ⚠️**
* Use WebSockets for real-time bidirectional** Can add TypeScript types manuallyBottom Line: For browser-based web apps, REST/JSON remains king because it’s simpler and doesn’t require proxy infrastructure. gRPC shines for backend microservices where you control both ends and can use native gRPC.