Open Service Mesh AKS add-on
AKS preview features are available on a self-service, opt-in basis. Previews are provided “as is” and “as available,” and they’re excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren’t meant for production use.
Install osm
Register the AKS-OpenServiceMesh feature flag
az feature register --namespace "Microsoft.ContainerService" --name "AKS-OpenServiceMesh"
Enable the OSM add-on on an existing AKS cluster
az aks enable-addons --addons open-service-mesh -g <my-osm-aks-cluster-rg> -n <my-osm-aks-cluster-name>
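To verify the add-on is enabled, a sketch (assuming the add-on profile is named openServiceMesh):
az aks show -g <my-osm-aks-cluster-rg> -n <my-osm-aks-cluster-name> --query 'addonProfiles.openServiceMesh.enabled'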
Subscriptions and Resource group
Subscriptions
By publishing APIs through API Management, you can easily secure API access using subscription keys. Client applications consume the published APIs by including a valid subscription key in HTTP requests when calling those APIs.
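For example, a call using the default Ocp-Apim-Subscription-Key header might look roughly like this (hostname, path, and key are placeholders):
curl -H "Ocp-Apim-Subscription-Key: <subscription-key>" https://<apim-instance>.azure-api.net/<api-suffix>/<operation>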
Managed Identity
Resource group
A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
availability_zones (Optional; default is all zones)
A list of Availability Zones in which the nodes in this node pool should be created.
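A rough CLI equivalent when adding a node pool is the --zones flag (names here are placeholders):
az aks nodepool add -g myResourceGroup --cluster-name myAKSCluster --name zonedpool --node-count 3 --zones 1 2 3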
network
Kubenet Routing/NAT
The address_space (e.g. 192.168.0.0/16) is in a different network segment from the pod_cidr (e.g. 10.244.0.0/16). An additional hop is required in the design of kubenet, which adds minor latency to pod communication. Azure supports a maximum of 400 routes in a UDR, so you can’t have an AKS cluster larger than 400 nodes.
The pod IP address range is used to assign a /24 address space to each node in the cluster. In the following example, the --pod-cidr of 10.244.0.0/16 assigns the first node 10.244.0.0/24, the second node 10.244.1.0/24, and the third node 10.244.2.0/24.
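A sketch for inspecting the /24 range assigned to each node (assumes kubectl access to the cluster):
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR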
automatic_channel_upgrade (Optional; default value is none)
First, enable the Microsoft.ContainerService/AutoUpgradePreview feature
az feature register --namespace Microsoft.ContainerService -n AutoUpgradePreview
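Once the feature shows as Registered, refresh the provider and set the channel on the cluster; a sketch (valid values include patch, stable, rapid, node-image, and none):
az provider register --namespace Microsoft.ContainerService
az aks update -g myResourceGroup -n myAKSCluster --auto-upgrade-channel stable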
create network
az network vnet create \
--resource-group myResourceGroup \
--name myAKSVnet \
--address-prefixes 192.168.0.0/16 \
--subnet-name myAKSSubnet \
--subnet-prefix 192.168.1.0/24
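The cluster create below needs the subnet resource ID in $SUBNET_ID; a sketch using the names above:
SUBNET_ID=$(az network vnet subnet show --resource-group myResourceGroup --vnet-name myAKSVnet --name myAKSSubnet --query id -o tsv)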
create cluster
az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--node-count 3 \
--network-plugin kubenet \
--service-cidr 10.0.0.0/16 \
--dns-service-ip 10.0.0.10 \
--pod-cidr 10.244.0.0/16 \
--docker-bridge-address 172.17.0.1/16 \
--vnet-subnet-id $SUBNET_ID \
--service-principal <appId> \
--client-secret <password>
Azure CNI Bridge
create the virtual network with two subnets
resourceGroup="myResourceGroup"
vnet="myVirtualNetwork"
location="westcentralus"
# Create the resource group
az group create --name $resourceGroup --location $location
# Create our two subnet network
az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
create cluster
clusterName="myAKSCluster"
subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
az aks create -n $clusterName -g $resourceGroup -l $location \
--max-pods 250 \
--node-count 2 \
--network-plugin azure \
--vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet \
--pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet
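Additional node pools can draw pod IPs from their own subnet via --pod-subnet-id; a sketch that reuses the subnets above (flag availability depends on your CLI version):
az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepool \
--max-pods 250 \
--node-count 2 \
--vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet \
--pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet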
Choose a network model to use
The choice of which network plugin to use for your AKS cluster is usually a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
Use kubenet when:
- You have limited IP address space.
- Most of the pod communication is within the cluster.
- You don’t need advanced AKS features such as virtual nodes or Azure Network Policy. Use Calico network policies.
Use Azure CNI when:
- You have available IP address space.
- Most of the pod communication is to resources outside of the cluster.
- You don’t want to manage user defined routes for pod connectivity.
- You need advanced AKS features such as virtual nodes or Azure Network Policy. Use Calico network policies.
For more information to help you decide which network model to use, see the AKS documentation comparing network models.
HTTP routing solution overview
The add-on deploys two components: a Kubernetes Ingress controller and an External-DNS controller.
Ingress controller: The Ingress controller is exposed to the internet by using a Kubernetes service of type LoadBalancer. The Ingress controller watches and implements Kubernetes Ingress resources, which creates routes to application endpoints.
External-DNS controller: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone.
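A sketch of enabling the add-on and reading the managed DNS zone name (group/cluster names are placeholders; the query path assumes the usual add-on profile layout):
az aks enable-addons -g myResourceGroup -n myAKSCluster --addons http_application_routing
az aks show -g myResourceGroup -n myAKSCluster --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o tsv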
Network policy
Network policy options in AKS
Azure provides two ways to implement network policy. You choose a network policy option when you create an AKS cluster. The policy option can’t be changed after the cluster is created:
- Azure’s own implementation, called Azure Network Policies.
- Calico Network Policies, an open-source network and network security solution founded by Tigera.
Both implementations use Linux IPTables to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTables filter rules.
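The choice is made at creation time with --network-policy; a sketch (Azure Network Policies require the azure network plugin, while Calico also works with kubenet):
az aks create -g myResourceGroup -n myAKSCluster --network-plugin azure --network-policy azure
az aks create -g myResourceGroup -n myAKSCluster --network-plugin kubenet --network-policy calico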
| upgrade value | stable | rapid |
| --- | --- | --- |
| k8s version | 1.20.9 | 1.20.9 |
| crt | docker | containerd |
default node pool (system node pool)
only_critical_addons_enabled - (Optional)
For a system node pool, AKS automatically assigns the label kubernetes.azure.com/mode: system. This causes AKS to prefer scheduling system pods on node pools that contain this label.
System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and metrics-server. User node pools serve the primary purpose of hosting your application pods.
To ensure your cluster operates reliably, you should run at least 2 (two) nodes in the default node pool, as it hosts critical system pods such as CoreDNS.
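Application pods typically go to a separate user node pool; a sketch (names are placeholders):
az aks nodepool add -g myResourceGroup --cluster-name myAKSCluster --name userpool --node-count 3 --mode User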
Add a taint so that the default node pool is used only for system pods
Enabling this option will taint the default node pool with the CriticalAddonsOnly=true:NoSchedule taint. Changing this forces a new resource to be created.
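The same taint can be applied when adding a system node pool with the CLI; a sketch (pool name is a placeholder):
az aks nodepool add -g myResourceGroup --cluster-name myAKSCluster --name systempool --node-count 2 --mode System --node-taints CriticalAddonsOnly=true:NoSchedule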