- Azure Load Balancer can scale applications and create high availability for services.
- It supports inbound and outbound scenarios, provides low latency and high throughput, and scales up to millions of flows for all TCP and UDP applications.
- For "Transport Layer Security (TLS) protocol termination" ("SSL offload") or "per-HTTP/HTTPS request" and "application-layer processing", use Application Gateway.
- For "global DNS load balancing", use Traffic Manager.
- A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network by translating their private IP addresses to public IP addresses.
- A load balancer resource can exist as either a public load balancer or an internal load balancer.
- A public load balancer handles incoming internet traffic to your VMs.
- With an internal load balancer, you can load-balance traffic across VMs inside a virtual network.
- You can also reach a load balancer front end from an on-premises network in a hybrid scenario.
- You can port forward traffic to a specific port on specific VMs with inbound network address translation (NAT) rules.
- The load balancer resource's functions are expressed as a front end, a rule, a health probe, and a backend pool definition.
- Load balancer resources are objects within which you can express how Azure should program its multi-tenant infrastructure to achieve the scenario that you want to create.
- You place VMs into the backend pool by referencing the backend pool from the VM's network interface configuration.
- There is no direct relationship between load balancer resources and actual infrastructure.
- Creating a load balancer doesn't create an instance, and capacity is always available.
- Load Balancer instantly reconfigures itself when you scale instances up or down.
- Adding or removing VMs from the backend pool reconfigures the load balancer without additional operations on the load balancer resource.
- Load Balancer uses a hash-based algorithm for distribution of inbound flows and rewrites the headers of flows to backend pool instances accordingly.
- A server is available to receive new flows when a health probe indicates a healthy backend endpoint.
- By default, Load Balancer uses a 5-tuple hash composed of source IP, source port, destination IP, destination port, and IP protocol number to map flows to available servers (see the sketch after this list).
- You can choose to create affinity to a specific source IP address by opting into a 2-tuple or 3-tuple hash.
- All packets of the same packet flow arrive on the same instance behind the load-balanced front end.
- When the client initiates a new flow from the same source IP, the source port changes.
- As a result, the 5-tuple might cause the traffic to go to a different backend endpoint.
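
To make these pieces concrete, here is a minimal Python sketch (not Azure SDK code) that models a backend pool, a health-probe result, and a load-balancing rule, and maps a flow to a healthy backend with a 5-tuple hash. The class names, addresses, and the use of SHA-256 are illustrative assumptions; Azure's real hash function and data plane are internal to the platform.

```python
from dataclasses import dataclass
from hashlib import sha256

# Illustrative model only: names, addresses, and ports are made up,
# and SHA-256 stands in for Azure's internal hash function.

@dataclass(frozen=True)
class Backend:
    name: str
    private_ip: str
    healthy: bool = True  # outcome of the health probe


@dataclass(frozen=True)
class LoadBalancingRule:
    frontend_ip: str
    frontend_port: int
    backend_port: int
    protocol: str  # "TCP" or "UDP"


def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, protocol):
    """Map a flow to a backend instance using a 5-tuple hash.

    Only endpoints the health probe reports as healthy are candidates, and
    the same 5-tuple always lands on the same backend while the pool is
    unchanged; a new source port can land on a different backend.
    """
    candidates = [b for b in backends if b.healthy]
    if not candidates:
        raise RuntimeError("no healthy backend endpoints")
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
    index = int.from_bytes(sha256(key).digest(), "big") % len(candidates)
    return candidates[index]


# Example: a TCP rule on port 80 in front of two healthy VMs.
pool = [Backend("vm1", "10.0.0.4"), Backend("vm2", "10.0.0.5")]
rule = LoadBalancingRule(frontend_ip="52.0.0.10", frontend_port=80,
                         backend_port=80, protocol="TCP")
chosen = pick_backend(pool, "203.0.113.7", 50123,
                      rule.frontend_ip, rule.frontend_port, rule.protocol)
print(f"flow delivered to {chosen.private_ip}:{rule.backend_port}")
```

Scaling the backend pool up or down only changes the candidate list; nothing on the load balancer resource itself has to be reprogrammed, which mirrors the behavior described above.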
Port Forwarding
- You can create an inbound NAT rule to port forward traffic from a specific port of a specific frontend IP address to a specific port of a specific backend instance inside the virtual network (see the sketch at the end of this section).
- This is also accomplished by the same hash-based distribution as load balancing.
- Common scenarios for this capability are RDP or Secure Shell (SSH) sessions to individual VM instances inside the Azure Virtual Network.
- You can map multiple internal endpoints to the various ports on the same frontend IP address.
- You can use them to remotely administer your VMs over the internet without the need for an additional jump box.
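
As a rough sketch of the idea, assuming made-up IP addresses and port numbers: each inbound NAT rule ties one frontend port to exactly one backend instance and port, which is what lets you reach individual VMs for RDP or SSH through a single frontend IP.

```python
# Hypothetical inbound NAT rule table; the frontend IP, private IPs, and
# port numbers are invented for illustration.
FRONTEND_IP = "52.0.0.10"

INBOUND_NAT_RULES = {
    # (frontend IP, frontend port) -> (backend private IP, backend port)
    (FRONTEND_IP, 50001): ("10.0.0.4", 3389),  # RDP to vm1
    (FRONTEND_IP, 50002): ("10.0.0.5", 3389),  # RDP to vm2
    (FRONTEND_IP, 50003): ("10.0.0.6", 22),    # SSH to vm3
}


def forward(frontend_ip: str, frontend_port: int) -> tuple[str, int]:
    """Return the (private IP, port) an inbound connection is translated to.

    Each frontend port maps to exactly one backend instance, so several VMs
    can share one frontend IP address on different ports.
    """
    return INBOUND_NAT_RULES[(frontend_ip, frontend_port)]


print(forward(FRONTEND_IP, 50002))  # ('10.0.0.5', 3389)
```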
Application Agnostic and Transparent
- Load Balancer does not directly interact with the TCP or UDP payload or with the application layer, so any TCP or UDP application scenario can be supported.
- Load Balancer does not terminate or originate flows, does not interact with the payload of a flow, and provides no application-layer gateway function; protocol handshakes always occur directly between the client and the backend pool instance (see the sketch below).
- A response to an inbound flow is always a response from a virtual machine.
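
A conceptual sketch of that pass-through behavior, not of how the Azure data plane is actually implemented: only the addressing headers that steer the flow to a backend instance are rewritten, while the payload, and any handshake carried in it, travels untouched between the client and the VM.

```python
from dataclasses import dataclass, replace

# Conceptual packet model with invented addresses; it only illustrates that
# load balancing rewrites flow headers and never touches application data.

@dataclass(frozen=True)
class Packet:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    payload: bytes


def deliver_to_backend(packet: Packet, backend_ip: str, backend_port: int) -> Packet:
    """Rewrite only the destination headers; leave the payload alone."""
    return replace(packet, dst_ip=backend_ip, dst_port=backend_port)


inbound = Packet("203.0.113.7", 50123, "52.0.0.10", 80, b"GET / HTTP/1.1\r\n\r\n")
delivered = deliver_to_backend(inbound, "10.0.0.4", 80)

assert delivered.payload == inbound.payload  # application data is untouched
assert delivered.src_ip == inbound.src_ip    # the VM sees the original client
```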