From 386c0ccc1dd1f73d8b8e2b4beb337a471dbccebc Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Mon, 11 May 2026 09:42:19 +0000 Subject: [PATCH] chore: update generated llms.txt files --- static/calico-cloud/llms-full.txt | 389 +- static/calico-cloud/llms.txt | 124 +- static/calico-enterprise/llms-full.txt | 17106 +++++++++++++---------- static/calico-enterprise/llms.txt | 216 +- static/calico/llms-full.txt | 653 +- static/calico/llms.txt | 242 +- static/llms.txt | 12 +- 7 files changed, 10648 insertions(+), 8094 deletions(-) diff --git a/static/calico-cloud/llms-full.txt b/static/calico-cloud/llms-full.txt index 6c273902f1..96df180154 100644 --- a/static/calico-cloud/llms-full.txt +++ b/static/calico-cloud/llms-full.txt @@ -406,19 +406,19 @@ Start a free trial or request a demo to see Calico in action. ##### [Overview](https://docs.tigera.io/calico-cloud/free/overview) -[Overview of Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/overview) +[What Calico Cloud Free Tier includes and excludes versus paid Calico Cloud — limits, supported platforms, and the upgrade path to a paid subscription.](https://docs.tigera.io/calico-cloud/free/overview) ##### [Quickstart](https://docs.tigera.io/calico-cloud/free/quickstart) -[Quickstart guide for Calico Cloud Free Tier.](https://docs.tigera.io/calico-cloud/free/quickstart) +[Quickstart that connects a Kubernetes cluster to Calico Cloud Free Tier for centralized network observability — no payment or trial required.](https://docs.tigera.io/calico-cloud/free/quickstart) ##### [Connect a cluster](https://docs.tigera.io/calico-cloud/free/connect-cluster-free) -[Securely connect your cluster to Calico Cloud Free Tier to access centralized network observability for your Kubernetes deployment.](https://docs.tigera.io/calico-cloud/free/connect-cluster-free) +[Connect a Kubernetes cluster to Calico Cloud Free Tier so it reports network observability data to a Tigera-managed 
dashboard.](https://docs.tigera.io/calico-cloud/free/connect-cluster-free) ##### [Remove a cluster](https://docs.tigera.io/calico-cloud/free/disconnect-cluster-free) -[Disconnect and remove your cluster from Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/disconnect-cluster-free) +[Disconnect a Kubernetes cluster from Calico Cloud Free Tier and remove the Calico Cloud components it installed.](https://docs.tigera.io/calico-cloud/free/disconnect-cluster-free) ### Calico Cloud Free Tier @@ -941,61 +941,61 @@ Requirements and guides for connecting your Kubernetes cluster to Calico Cloud. ##### [Calico Cloud architecture](https://docs.tigera.io/calico-cloud/get-started/cc-arch-diagram) -[Understand the main components of Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/cc-arch-diagram) +[Architecture overview of Calico Cloud — components that run in the connected cluster and the SaaS-side services they communicate with.](https://docs.tigera.io/calico-cloud/get-started/cc-arch-diagram) ##### [What happens when you connect a cluster to Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/connect-cluster) -[Get answers to your questions about connecting to Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/connect-cluster) +[What happens when you connect a Kubernetes cluster to Calico Cloud — what is installed, what data leaves the cluster, and what changes in the cluster.](https://docs.tigera.io/calico-cloud/get-started/connect-cluster) ##### [System requirements](https://docs.tigera.io/calico-cloud/get-started/system-requirements) -[Review cluster requirements to connect to Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/system-requirements) +[Cluster, platform, and version requirements a Kubernetes cluster must meet before it can connect to Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/system-requirements) ##### [Prepare your cluster for Calico 
Cloud](https://docs.tigera.io/calico-cloud/get-started/prepare-cluster) -[Prepare your cluster to install Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/prepare-cluster) +[Prepare a Kubernetes cluster for connection to Calico Cloud — pre-flight checks, RBAC, and image-pull configuration.](https://docs.tigera.io/calico-cloud/get-started/prepare-cluster) ##### [Limitations and known issues for Windows nodes](https://docs.tigera.io/calico-cloud/get-started/windows-limitations) -[Review limitations before starting installation.](https://docs.tigera.io/calico-cloud/get-started/windows-limitations) +[Known limitations for Calico Cloud on Windows worker nodes that you should review before planning a connection.](https://docs.tigera.io/calico-cloud/get-started/windows-limitations) ## Connect your cluster[​](#connect-your-cluster) ##### [Install Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/install-cluster) -[Steps to connect your cluster to Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/install-cluster) +[Connect a Kubernetes cluster to Calico Cloud using the standard install command from the management UI.](https://docs.tigera.io/calico-cloud/get-started/install-cluster) ##### [Set up a private registry](https://docs.tigera.io/calico-cloud/get-started/setup-private-registry) -[Add images to a private registry for installing Calico Cloud on a cluster.](https://docs.tigera.io/calico-cloud/get-started/setup-private-registry) +[Mirror Calico Cloud container images into a private registry so air-gapped clusters can install without reaching the public registry.](https://docs.tigera.io/calico-cloud/get-started/setup-private-registry) ##### [Install using a private registry](https://docs.tigera.io/calico-cloud/get-started/install-private-registry) -[Steps to connect your cluster to Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/install-private-registry) +[Connect a Kubernetes cluster to Calico Cloud when its 
container images must be pulled from a private registry.](https://docs.tigera.io/calico-cloud/get-started/install-private-registry) ##### [Install Calico Cloud as part of an automated workflow](https://docs.tigera.io/calico-cloud/get-started/install-automated) -[Install Calico Cloud as part of an automated workflow.](https://docs.tigera.io/calico-cloud/get-started/install-automated) +[Connect a Kubernetes cluster to Calico Cloud as part of an automated CI or provisioning workflow rather than the interactive UI flow.](https://docs.tigera.io/calico-cloud/get-started/install-automated) ##### [Prepare your cluster for Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/prepare-cluster) -[Prepare your cluster to install Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/prepare-cluster) +[Prepare a Kubernetes cluster for connection to Calico Cloud — pre-flight checks, RBAC, and image-pull configuration.](https://docs.tigera.io/calico-cloud/get-started/prepare-cluster) ## Troubleshooting[​](#troubleshooting) ##### [Tigera Operator troubleshooting checklist](https://docs.tigera.io/calico-cloud/get-started/operator-checklist) -[Additional troubleshooting for the Tigera Operator.](https://docs.tigera.io/calico-cloud/get-started/operator-checklist) +[Troubleshoot the Tigera Operator on Calico Cloud connected clusters when the standard support checklist is not enough.](https://docs.tigera.io/calico-cloud/get-started/operator-checklist) ##### [Troubleshooting checklist](https://docs.tigera.io/calico-cloud/get-started/checklist) -[Review this checklist before opening a Support ticket.](https://docs.tigera.io/calico-cloud/get-started/checklist) +[Gather information and run pre-flight checks before opening a Calico Cloud support ticket so triage moves quickly.](https://docs.tigera.io/calico-cloud/get-started/checklist) ## Upgrade[​](#upgrade) ##### [Upgrade Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/upgrade-cluster) -[Steps to upgrade to 
the latest version of Calico Cloud.](https://docs.tigera.io/calico-cloud/get-started/upgrade-cluster) +[Upgrade a connected Calico Cloud cluster to the latest released version of the in-cluster components.](https://docs.tigera.io/calico-cloud/get-started/upgrade-cluster) ### Calico Cloud architecture @@ -7265,87 +7265,87 @@ Writing network policies is how you restrict traffic to pods in your Kubernetes ##### [Policy best practices](https://docs.tigera.io/calico-cloud/network-policy/policy-best-practices) -[Learn policy best practices for security, scalability, and performance.](https://docs.tigera.io/calico-cloud/network-policy/policy-best-practices) +[Best practices for Calico Cloud policy across connected clusters — security posture, scalability with tiers, and performance tuning under load.](https://docs.tigera.io/calico-cloud/network-policy/policy-best-practices) ##### [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico-cloud/network-policy/beginners/kubernetes-default-deny) -[Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined.](https://docs.tigera.io/calico-cloud/network-policy/beginners/kubernetes-default-deny) +[Apply a default-deny network policy in a Calico Cloud connected cluster so unprotected pods are denied traffic until explicit policy is written.](https://docs.tigera.io/calico-cloud/network-policy/beginners/kubernetes-default-deny) ##### [Get started with Calico network policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-network-policy) -[Create your first Calico network policies. 
Shows the rich features using sample policies that extend native Kubernetes network policy.](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-network-policy) +[Write your first Calico Cloud NetworkPolicy — sample policies that exercise the rich rule features beyond Kubernetes NetworkPolicy.](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-network-policy) ##### [Get started with network sets](https://docs.tigera.io/calico-cloud/network-policy/networksets) -[Learn the power of network sets and why you should create them.](https://docs.tigera.io/calico-cloud/network-policy/networksets) +[Use Calico Cloud network sets to package frequently reused IP ranges or domains into named selectors that policies can reference across connected clusters.](https://docs.tigera.io/calico-cloud/network-policy/networksets) ##### [DNS policy](https://docs.tigera.io/calico-cloud/network-policy/domain-based-policy) -[Use domain names to allow traffic to destinations outside of a cluster by their DNS names instead of by their IP addresses.](https://docs.tigera.io/calico-cloud/network-policy/domain-based-policy) +[Allow traffic to external destinations by DNS name using Calico Cloud domain-based policy rules — without maintaining static IP lists.](https://docs.tigera.io/calico-cloud/network-policy/domain-based-policy) ## Policy rules[​](#policy-rules) ##### [Basic rules](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/policy-rules-overview) -[Define network connectivity for Calico endpoints using policy rules and label selectors.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/policy-rules-overview) +[How to write policy rules in Calico Cloud — label selectors, source and destination match criteria, and rule actions.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/policy-rules-overview) ##### [Use namespace rules in 
policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/namespace-policy) -[Use namespaces and namespace selectors in Calico network policy to group or separate resources. Use network policies to allow or deny traffic to/from pods that belong to specific namespaces.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/namespace-policy) +[Group or separate workloads in Calico Cloud policy using namespaces and namespace selectors so policies apply only to specified namespaces.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/namespace-policy) ##### [Use service rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-policy) -[Use Kubernetes Service names in policy rules.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-policy) +[Match on Kubernetes Service names in Calico Cloud policy rules instead of specific pod selectors.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-policy) ##### [Use service accounts rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-accounts) -[Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-accounts) +[Match on Kubernetes service accounts in Calico Cloud policy rules to validate workload identity and apply RBAC-controlled rules.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-accounts) ##### [Use external IPs or networks rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/external-ips-policy) -[Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network 
sets.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/external-ips-policy) +[Restrict egress and ingress to specific IP ranges in Calico Cloud policy, either inline or via reusable network sets.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/external-ips-policy) ##### [Use ICMP/ping rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/icmp-ping) -[Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/icmp-ping) +[Allow or deny ICMP and ping traffic for Calico Cloud workloads and host endpoints using policy rules.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/icmp-ping) ## Policy tiers[​](#policy-tiers) ##### [Get started with policy tiers](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/tiered-policy) -[Understand how tiered policy works and supports microsegmentation.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/tiered-policy) +[How tiered policy works in Calico Cloud — evaluation order, pass actions, and using tiers to enforce microsegmentation across connected clusters.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/tiered-policy) ##### [Change allow-tigera tier behavior](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/allow-tigera) -[Understand how to change the behavior of the allow-tigera tier.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/allow-tigera) +[Customize the behavior of the allow-tigera tier that Calico Cloud installs by default to keep its own components reachable.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/allow-tigera) ##### [Network policy tutorial](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/policy-tutorial-ui) -[Covers the basics of 
Calico Cloud network policy.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/policy-tutorial-ui) +[Tutorial for the Calico Cloud policy management UI — author, order, and stage policies inside tiers from the web console.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/policy-tutorial-ui) ##### [Configure RBAC for tiered policies](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/rbac-tiered-policies) -[Configure RBAC to control access to policies and tiers.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/rbac-tiered-policies) +[Configure Kubernetes RBAC to control which users can edit Calico Cloud policies in each tier across connected clusters.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/rbac-tiered-policies) ## Policy for services[​](#policy-for-services) ##### [Apply Calico Cloud policy to Kubernetes node ports](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/kubernetes-node-ports) -[Restrict access to Kubernetes node ports using Calico Cloud global network policy. 
Follow the steps to secure the host, the node ports, and the cluster.](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/kubernetes-node-ports) +[Restrict access to Kubernetes NodePort services using a Calico Cloud GlobalNetworkPolicy at the host endpoint.](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/kubernetes-node-ports) ##### [Apply Calico Cloud policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/services-cluster-ips) -[Expose Kubernetes service cluster IPs over BGP using Calico Cloud, and restrict who can access them using Calico Cloud network policy.](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/services-cluster-ips) +[Expose Kubernetes Service ClusterIPs over BGP using Calico Cloud and restrict who can reach them with network policy.](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/services-cluster-ips) ## Policy for extreme traffic[​](#policy-for-extreme-traffic) ##### [Enable extreme high-connection workloads](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/high-connection-workloads) -[Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience extremely large number of connections.](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/high-connection-workloads) +[Bypass Linux conntrack with a Calico Cloud policy rule for workloads that handle an extreme number of concurrent connections.](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/high-connection-workloads) ##### [Defend against DoS attacks](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/defend-dos-attack) -[Define DoS mitigation rules in Calico Cloud policy to quickly drop connections when under attack. 
Learn how rules use eBPF and XDP, including hardware offload when available.](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/defend-dos-attack) +[Define DoS mitigation rules in Calico Cloud policy that drop connections at the eBPF or XDP layer, with hardware offload when available.](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/defend-dos-attack) ### Policy recommendations @@ -7353,11 +7353,11 @@ Writing network policies is how you restrict traffic to pods in your Kubernetes ## [📄️Enable policy recommendations](https://docs.tigera.io/calico-cloud/network-policy/recommendations/policy-recommendations) -[Enable continuous policy recommendations to secure unprotected namespaces or workloads.](https://docs.tigera.io/calico-cloud/network-policy/recommendations/policy-recommendations) +[Run continuous Calico Cloud policy recommendations so unprotected namespaces and workloads pick up baseline policy automatically.](https://docs.tigera.io/calico-cloud/network-policy/recommendations/policy-recommendations) ## [📄️Policy recommendations tutorial](https://docs.tigera.io/calico-cloud/network-policy/recommendations/learn-about-policy-recommendations) -[Policy recommendations tutorial.](https://docs.tigera.io/calico-cloud/network-policy/recommendations/learn-about-policy-recommendations) +[Tutorial walkthrough of the Calico Cloud policy recommendations engine — what it generates, how to review it, and how to promote it to enforced.](https://docs.tigera.io/calico-cloud/network-policy/recommendations/learn-about-policy-recommendations) ### Enable policy recommendations @@ -8184,19 +8184,19 @@ Zero trust means that you do not trust anyone or anything. 
Calico Cloud handles ## [📄️Get started with policy tiers](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/tiered-policy) -[Understand how tiered policy works and supports microsegmentation.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/tiered-policy) +[How tiered policy works in Calico Cloud — evaluation order, pass actions, and using tiers to enforce microsegmentation across connected clusters.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/tiered-policy) ## [📄️Network policy tutorial](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/policy-tutorial-ui) -[Covers the basics of Calico Cloud network policy.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/policy-tutorial-ui) +[Tutorial for the Calico Cloud policy management UI — author, order, and stage policies inside tiers from the web console.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/policy-tutorial-ui) ## [📄️Change allow-tigera tier behavior](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/allow-tigera) -[Understand how to change the behavior of the allow-tigera tier.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/allow-tigera) +[Customize the behavior of the allow-tigera tier that Calico Cloud installs by default to keep its own components reachable.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/allow-tigera) ## [📄️Configure RBAC for tiered policies](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/rbac-tiered-policies) -[Configure RBAC to control access to policies and tiers.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/rbac-tiered-policies) +[Configure Kubernetes RBAC to control which users can edit Calico Cloud policies in each tier across connected clusters.](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/rbac-tiered-policies) ### Get started with policy tiers @@ -10243,19 +10243,19 @@ For help with 
Pass action rules, see [Get started with tiered policy](https://do ## [📄️Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico-cloud/network-policy/beginners/kubernetes-default-deny) -[Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined.](https://docs.tigera.io/calico-cloud/network-policy/beginners/kubernetes-default-deny) +[Apply a default-deny network policy in a Calico Cloud connected cluster so unprotected pods are denied traffic until explicit policy is written.](https://docs.tigera.io/calico-cloud/network-policy/beginners/kubernetes-default-deny) ## [📄️Get started with Calico network policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-network-policy) -[Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy.](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-network-policy) +[Write your first Calico Cloud NetworkPolicy — sample policies that exercise the rich rule features beyond Kubernetes NetworkPolicy.](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-network-policy) ## [📄️Calico Cloud automatic labels](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-labels) -[Calico Cloud automatic labels for use with resources.](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-labels) +[Reference list of automatic labels Calico Cloud attaches to resources, useful as selectors in policy rules.](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-labels) ## [📄️Calico Cloud for Kubernetes demo](https://docs.tigera.io/calico-cloud/network-policy/beginners/simple-policy-cnx) -[Learn the extra features for Calico Cloud that make it so important for production environments.](https://docs.tigera.io/calico-cloud/network-policy/beginners/simple-policy-cnx) +[Tour of the additional 
features Calico Cloud adds to Kubernetes policy that make it suitable for production environments.](https://docs.tigera.io/calico-cloud/network-policy/beginners/simple-policy-cnx) ## [🗃Policy rules](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/) @@ -11254,27 +11254,27 @@ Now, let's enable access to the nginx service using a NetworkPolicy. This will a ## [📄️Basic rules](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/policy-rules-overview) -[Define network connectivity for Calico endpoints using policy rules and label selectors.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/policy-rules-overview) +[How to write policy rules in Calico Cloud — label selectors, source and destination match criteria, and rule actions.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/policy-rules-overview) ## [📄️Use namespace rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/namespace-policy) -[Use namespaces and namespace selectors in Calico network policy to group or separate resources. 
Use network policies to allow or deny traffic to/from pods that belong to specific namespaces.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/namespace-policy) +[Group or separate workloads in Calico Cloud policy using namespaces and namespace selectors so policies apply only to specified namespaces.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/namespace-policy) ## [📄️Use service accounts rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-accounts) -[Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-accounts) +[Match on Kubernetes service accounts in Calico Cloud policy rules to validate workload identity and apply RBAC-controlled rules.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-accounts) ## [📄️Use service rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-policy) -[Use Kubernetes Service names in policy rules.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-policy) +[Match on Kubernetes Service names in Calico Cloud policy rules instead of specific pod selectors.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-policy) ## [📄️Use external IPs or networks rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/external-ips-policy) -[Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/external-ips-policy) +[Restrict egress and ingress to specific IP ranges in Calico Cloud policy, either inline or via reusable network 
sets.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/external-ips-policy) ## [📄️Use ICMP/ping rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/icmp-ping) -[Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/icmp-ping) +[Allow or deny ICMP and ping traffic for Calico Cloud workloads and host endpoints using policy rules.](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/icmp-ping) ### Basic rules @@ -12077,11 +12077,11 @@ For more on the ICMP match criteria, see: ## [📄️Apply Calico Cloud policy to Kubernetes node ports](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/kubernetes-node-ports) -[Restrict access to Kubernetes node ports using Calico Cloud global network policy. Follow the steps to secure the host, the node ports, and the cluster.](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/kubernetes-node-ports) +[Restrict access to Kubernetes NodePort services using a Calico Cloud GlobalNetworkPolicy at the host endpoint.](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/kubernetes-node-ports) ## [📄️Apply Calico Cloud policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/services-cluster-ips) -[Expose Kubernetes service cluster IPs over BGP using Calico Cloud, and restrict who can access them using Calico Cloud network policy.](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/services-cluster-ips) +[Expose Kubernetes Service ClusterIPs over BGP using Calico Cloud and restrict who can reach them with network policy.](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/services-cluster-ips) ### Apply Calico Cloud policy to Kubernetes node 
ports @@ -12843,11 +12843,11 @@ For more detail about the relevant resources, see [GlobalNetworkSet](https://doc ## [📄️Enable and enforce application layer policies](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp) -[Enforce application layer policies in your cluster to configure access controls based on L7 attributes.](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp) +[Configure access controls based on Layer-7 attributes by enforcing Calico Cloud application-layer policy in connected clusters.](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp) ## [📄️Application layer policy tutorial](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp-tutorial) -[Learn how to apply ALP to your workloads and control ingress traffic.](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp-tutorial) +[Step-by-step tutorial for applying Calico Cloud application-layer policy to workloads in a connected cluster — control ingress traffic by HTTP attributes.](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp-tutorial) ### Enable and enforce application layer policies @@ -13162,15 +13162,15 @@ We omitted the JSON formatting because we do not expect to get a valid JSON resp ## [📄️Determine the best Calico Cloud/Fortinet solution](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/overview) -[Learn how to integrate Kubernetes clusters with existing Fortinet firewall workflows using Calico Cloud.](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/overview) +[Integrate Kubernetes clusters with existing Fortinet firewall workflows using Calico Cloud — architecture, components, and what each side enforces.](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/overview) ## [📄️Extend Kubernetes to 
Fortinet firewall devices](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/firewall-integration) -[Enable FortiGate firewalls to control traffic from Kubernetes workloads.](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/firewall-integration) +[Use a FortiGate firewall to control egress traffic from Kubernetes workloads in a Calico Cloud connected cluster.](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/firewall-integration) ## [📄️Extend FortiManager firewall policies to Kubernetes](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration) -[Extend FortiManager firewall policies to Kubernetes with Calico Cloud](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration) +[Extend FortiManager firewall policies into Kubernetes workloads in a Calico Cloud connected cluster.](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration) ### Determine the best Calico Cloud/Fortinet solution @@ -14421,11 +14421,11 @@ For preDNAT policies, flow logs display the original destination IP and port bef ## [📄️Enable extreme high-connection workloads](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/high-connection-workloads) -[Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience extremely large number of connections.](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/high-connection-workloads) +[Bypass Linux conntrack with a Calico Cloud policy rule for workloads that handle an extreme number of concurrent connections.](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/high-connection-workloads) ## [📄️Defend against DoS 
attacks](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/defend-dos-attack)

-[Define DoS mitigation rules in Calico Cloud policy to quickly drop connections when under attack. Learn how rules use eBPF and XDP, including hardware offload when available.](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/defend-dos-attack)
+[Define DoS mitigation rules in Calico Cloud policy that drop connections at the eBPF or XDP layer, with hardware offload when available.](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/defend-dos-attack)

### Enable extreme high-connection workloads

@@ -19150,9 +19150,9 @@ To enable WAF on a Calico Ingress Gateway:

To deploy a WAF on multiple gateways, you must create a separate `EnvoyExtensionPolicy` resource for each `Gateway` resource. Each `EnvoyExtensionPolicy` must reference the same `tigera-waf-backend` backend.

-3. To verify that the WAF is enabled for your gateway, you can simulate an SQL injection attack through your gateway and see whether it triggers a security event.
+3. To verify that the WAF is enabled for your gateway, you can simulate an SQL injection attack through your gateway and check that the WAF logs the request.

-   > **SECONDARY:** The query string in this example has some SQL syntax embedded in the text. This is harmless and for demo purposes, but WAF will detect this pattern and create a WAF log for this HTTP request.
+   > **SECONDARY:** The query string in this example has some SQL syntax embedded in the text. This is harmless and for demo purposes, but WAF will detect this pattern and create a WAF log for this HTTP request. By design, Calico Ingress Gateway WAF emits WAF logs rather than security events.

   1. Get the service IP of your gateway:

@@ -19160,13 +19160,13 @@ To enable WAF on a Calico Ingress Gateway:

     export GATEWAY_HOST=$(kubectl get gateway/<gateway-name> -o jsonpath='{.status.addresses[0].value}')
     ```

-  2. 
Trigger a security event by simulating an SQL injection attack on that service IP: + 2. Simulate an SQL injection attack on that service IP: ```bash curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/?artist=0+div+1+union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A1%2C2%2Ccurrent_user ``` - 3. From the web console, go to **Threat > Security Events** and check for a security event with corresponds with this simulated SQL injection attack. + 3. In Kibana, select the `tigera_secure_ee_waf*` index pattern and look for a WAF log that corresponds with this request. If blocking mode is enabled, the `curl` command also returns a `403 Forbidden` response. ## Customizing your WAF configuration for an ingress gateway[​](#customizing-your-waf-configuration-for-an-ingress-gateway) @@ -20366,6 +20366,225 @@ Get reports on Kubernetes workloads and environments for regulatory compliance. [Configure an HTTP proxy to use for connections that leave the cluster](https://docs.tigera.io/calico-cloud/compliance/configure-http-proxy) +### Istio Ambient Mode + +You can use Calico Cloud to deploy and manage an Istio service mesh on your cluster. Calico Cloud installs Istio in ambient mode, which conserves resources while providing the same robust mTLS encryption for your workloads. + +> **SECONDARY:** Istio Ambient Mode is a tech preview feature. Tech preview features are subject to significant changes before they become GA. + +## About Istio Ambient Mode[​](#about-istio-ambient-mode) + +Istio is a service mesh that manages and secures communication between microservices. Typically, Istio uses sidecar proxies that are deployed alongside every pod in the service mesh. At scale, running these sidecar proxies can be difficult to manage and a drain on resources. + +Istio Ambient Mode is a simplified service mesh architecture that removes the need for a sidecar proxy next to every pod. 
Instead, it uses node-level components for shared security and a layered approach for advanced traffic management. This design saves on computing resources and simplifies operations. + +## About Istio Ambient Mode on Calico[​](#about-istio-ambient-mode-on-calico) + +Calico Cloud provides a bundled version of Istio that can be installed and managed by the Tigera Operator. + +This integration automates the lifecycle of the Istio components to reduce manual configuration overhead. CVEs are addressed as part of the regular Calico Cloud patch release cadence. Administrators provision the Istio service mesh by defining a standard `Istio` custom resource. + +### The enhanced zTunnel proxy[​](#the-enhanced-ztunnel-proxy) + +The zTunnel component in Istio Ambient Mode is a lightweight proxy that runs on every node. + +Its main job is to handle encryption, authentication, and policy enforcement for traffic at Layer 4. + +A challenge in the original Istio Ambient Mode is that when traffic is routed through the zTunnel, it gets placed into a tunnel on a specific port (15008). This change makes it impossible for existing Layer 3 or Layer 4 network policies (like those from Calico) to see the original destination port of the traffic. + +Calico addresses this by using an enhanced zTunnel that is modified to preserve the original destination port. This modification allows existing Calico and Kubernetes network policies to continue functioning exactly as they did before, without needing any rewrites, even though the traffic is now encrypted with mTLS. + +These zTunnel enhancements are not compatible with Istio's application-layer Waypoint proxy. If you deploy Waypoint, the reported destination ports will follow the original behavior. Existing network policies need to be adapted to allow communication to port 15008. + +## Additional resources[​](#additional-resources) + +- [Overview of Istio ambient mode](https://istio.io/latest/docs/ambient/overview/). 
+- [Ambient and Kubernetes NetworkPolicy](https://istio.io/latest/docs/ambient/usage/networkpolicy/)
+
+### Deploy Istio Ambient Mode on your cluster
+
+You can deploy Calico's bundled version of Istio in ambient mode to provide mTLS encryption to your workloads.
+
+> **SECONDARY:** Istio Ambient Mode is a tech preview feature. Tech preview features are subject to significant changes before they become GA.
+
+## Limitations[​](#limitations)
+
+- [Application layer network policies](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp) are not compatible with the Istio service mesh.
+- Istio Ambient Mode does not work together with [workload-based web application firewalls](https://docs.tigera.io/calico-cloud/threat/web-application-firewall).
+- The service mesh is not supported for use on clusters that are also part of a [cluster mesh](https://docs.tigera.io/calico-cloud/multicluster/).
+- Destination ports are preserved only when Istio is deployed without Waypoint. If you deploy Waypoint, all traffic through Waypoint will show port 15008 as its destination port.
+- Connect-time load balancing is not compatible with Istio Ambient Mode.
+
+## Prerequisites[​](#prerequisites)
+
+- Calico Cloud is installed and managed by the Tigera Operator.
+
+## Install Istio in ambient mode on your cluster[​](#install-istio-in-ambient-mode-on-your-cluster)
+
+You can create an Istio service mesh in ambient mode by creating the `Istio` custom resource.
+
+- To install Istio in ambient mode, apply the `Istio` custom resource to your cluster:
+
+  ```bash
+  cat <<EOF | kubectl apply -f -
+  apiVersion: operator.tigera.io/v1
+  kind: Istio
+  metadata:
+    name: default
+  EOF
+  ```
+
+  > **SECONDARY:** To customize resource requirements for your Istio deployment, see the options available in the [installation API documentation](https://docs.tigera.io/calico-cloud/reference/installation/api).
+
+  To verify the installation:
+
+  ```bash
+  kubectl get tigerastatus
+  ```
+
+  Example output
+
+  ```shell
+  NAME                          AVAILABLE   PROGRESSING   DEGRADED   SINCE
+  apiserver                     True        False         False      9m59s
+  calico                        True        False         False      9m4s
+  intrusion-detection           True        False         False      5m39s
+  ippools                       True        False         False      10m
+  istio                         True        False         False      19s
+  log-collector                 True        False         False      8m34s
+  log-storage                   True        False         False      10m
+  log-storage-access            True        False         False      4m24s
+  log-storage-dashboards        True        False         False      4m58s
+  log-storage-elastic           True        False         False      5m4s
+  log-storage-esmetrics         True        False         False      4m54s
+  log-storage-kubecontrollers   True        False         False      5m9s
+  log-storage-secrets           True        False         False      10m
+  manager                       True        False         False      8m24s
+  monitor                       True        False         False      9m44s
+  policy-recommendation         True        False         False      9m24s
+  tiers                         True        False         False      9m44s
+  ```
+
+  Now you can add your workloads to the Istio service mesh.
+
+## Add a workload to the Istio service mesh[​](#add-a-workload-to-the-istio-service-mesh)
+
+You can add workloads to the mesh by labeling them. Communication between labeled namespaces and pods goes through the mesh and uses mTLS encryption.
+
+> **WARNING:** Don't label Calico Cloud resources to add them to the service mesh. Doing this can cause interruptions and failures in your cluster network.
+>
+> If you want to secure Calico Cloud components, see [Secure Calico component communications](https://docs.tigera.io/calico-cloud/operations/comms/).
+
+1. To add workloads to your Istio service mesh, add the `istio.io/dataplane-mode=ambient` label to a pod or namespace resource:
+
+   Adding a namespace to the Istio service mesh
+
+   ```bash
+   kubectl label namespace <namespace> istio.io/dataplane-mode=ambient
+   ```
+
+   Replace `<namespace>` with the namespace you want to include in the mesh.
+
+   Adding a pod to the Istio service mesh
+
+   ```bash
+   kubectl label pod <pod-name> --namespace=<namespace> istio.io/dataplane-mode=ambient
+   ```
+
+   Replace the following:
+
+   - `<pod-name>`: The name of the pod you want to include in the mesh.
+   - `<namespace>`: The namespace your pod is in.
+
+## Removing Istio[​](#removing-istio)
+
+If you want to remove Istio, first remove the labels you applied to pods and namespaces. When that's done, you can delete the `Istio` custom resource.
+
+1. Remove the label from namespaces and pods by running the following commands:
+
+   ```bash
+   kubectl label namespaces --all istio.io/dataplane-mode=ambient-
+   kubectl label pods --all --all-namespaces istio.io/dataplane-mode=ambient-
+   ```
+
+2. Remove the `Istio` custom resource:
+
+   ```bash
+   kubectl delete istio.operator.tigera.io default
+   ```
+
+## Troubleshooting commands[​](#troubleshooting-commands)
+
+Check whether Istio pods are deployed:
+
+```bash
+kubectl get pods -n calico-system | grep 'istio\|ztunnel'
+```
+
+Check whether Istio CRDs are deployed:
+
+```bash
+kubectl get crd | grep istio
+```
+
+Check which pods and namespaces are in the mesh:
+
+- Requires [istioctl](https://istio.io/latest/docs/ops/diagnostic-tools/istioctl/).
+
+```bash
+istioctl ztunnel-config workloads -n calico-system
+```
+
+Check for errors logged by the zTunnel component:
+
+```bash
+ZTUNNEL_PODS=$(kubectl get pod -n calico-system \
+  -l app.kubernetes.io/name=ztunnel \
+  -o jsonpath='{.items[*].metadata.name}')
+
+for P in $ZTUNNEL_PODS; do
+  echo "--- Checking logs for pod: $P ---"
+  kubectl logs $P -n calico-system 2>/dev/null | \
+    grep -i error | \
+    grep -i app1
+done
+```
+
+## Additional resources[​](#additional-resources)
+
+- [Overview of Istio ambient mode](https://istio.io/latest/docs/ambient/overview/).
+- [Configuration options](https://docs.tigera.io/calico-cloud/reference/installation/api).
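+As noted in the limitations above, traffic that passes through a Waypoint proxy reports port 15008 as its destination, so existing network policies keyed to application ports must be adapted to allow that port. The following is an illustrative sketch only — the policy name and namespace are hypothetical, not taken from the product docs — of a standard Kubernetes NetworkPolicy that admits the tunnel port:
+
+```yaml
+# Illustrative sketch: allow ingress on the Istio ambient tunnel port (15008).
+# The name and namespace below are hypothetical examples.
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-ambient-tunnel
+  namespace: app1
+spec:
+  podSelector: {}
+  policyTypes:
+    - Ingress
+  ingress:
+    - ports:
+        - protocol: TCP
+          port: 15008
+```
+
+The same adaptation applies to Calico network policies: add port 15008 to the relevant allow rules for workloads behind a Waypoint.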
+ ### Enable compliance reports > **WARNING:** Compliance reports are deprecated and will be removed in a future release. We're building a new compliance reporting system that will eventually replace the current one. @@ -21756,13 +21975,17 @@ You can specify core configuration elements of your ingress gateway by specifyin Many customizations are available for the `GatewayAPI` resource. This resource has fields that allow some aspects of Gateway deployments to be customized. For example: -- `spec.gatewayDeployment.spec.template.metadata` allows arbitrary labels or annotations to be added to the pod that is created to implement each configured Gateway. -- `spec.gatewayDeployment.spec.template.spec.nodeSelector` allows control over where gateway implementation pods are scheduled. -- `spec.gatewayDeployment.spec.template.spec.containers` allows control over the memory and CPU that the gateway implementation pods can use. - `spec.gatewayControllerDeployment.spec.template.spec.nodeSelector` allows control over where the gateway controller is scheduled. -- `spec.gatewayDeployment.service.metadata.annotations` allows control over annotations to place on the service that is provisioned for each Gateway; these can be used, for example, to configure the cloud-specific type and properties of the associated external load balancer. -- `spec.gatewayDeployment.service.spec.*loadbalancer*` allows control over the corresponding `*loadbalancer*` fields in the service that is provisioned for each gateway; in some clouds these can also be used to configure the type and properties of the associated external load balancer. -- `spec.gatewayClasses` allows the provisioning of multiple `GatewayClass` resources, each with its own set of customizations, instead of the single `tigera-gateway-class` gateway class that the Tigera Operator provisions by default. 
+ +- `spec.gatewayClasses` allows the provisioning of multiple `GatewayClass` resources, each with its own set of customizations, instead of the single `tigera-gateway-class` gateway class that the Tigera Operator provisions by default. Possible `GatewayClass` customizations include the following: + + + + - `gatewayDeployment.spec.template.metadata` allows arbitrary labels or annotations to be added to the pod that is created to implement each Gateway within that class. + - `gatewayDeployment.spec.template.spec.nodeSelector` allows control over where gateway implementation pods are scheduled. + - `gatewayDeployment.spec.template.spec.containers` allows control over the memory and CPU that the gateway implementation pods can use. + - `gatewayService.metadata.annotations` allows control over annotations to place on the service that is provisioned for each Gateway; these can be used, for example, to configure the cloud-specific type and properties of the associated external load balancer. + - `gatewayService.spec.*loadBalancer*` allows control over the corresponding `*loadBalancer*` fields in the service that is provisioned for each gateway; in some clouds these can also be used to configure the type and properties of the associated external load balancer. For full details, see [the `GatewayAPI` reference documentation](https://docs.tigera.io/calico-cloud/reference/installation/api#gatewayapi). @@ -62579,6 +62802,14 @@ To configure notifications, click the user icon **> Notifications**. ### New features and enhancements[​](#new-features-and-enhancements-1) +#### Istio Ambient Mode (tech preview)[​](#istio-ambient-mode-tech-preview) + +Calico Cloud now provides a bundled version of Istio in ambient mode, a sidecarless architecture that delivers robust mTLS encryption and service mesh security while significantly reducing resource consumption and operational overhead. 
This implementation, managed by the Tigera Operator, features an enhanced zTunnel proxy that preserves original destination ports so existing Calico and Kubernetes network policies continue to function seamlessly without requiring rewrites. + +For more information, see [Istio Ambient Mode](https://docs.tigera.io/calico-cloud/compliance/istio/about-istio-ambient). + +#### Enhancements[​](#enhancements) + - Enhancements to to Calico Cloud dashboards. ## February 5, 2026 (web console update) @@ -62613,7 +62844,7 @@ Calico Cloud's built-in observability dashboards are now generally available. Th For more information, see [Dashboards](https://docs.tigera.io/calico-cloud/observability/dashboards). -#### Enhancements[​](#enhancements) +#### Enhancements[​](#enhancements-1) - Added various user experience improvements to dashboards in the web console. @@ -62687,7 +62918,7 @@ This release includes support for HTTP header-based matching for application lay For more information, see [Global network policy](https://docs.tigera.io/calico-cloud/reference/resources/globalnetworkpolicy#httpheadermatch). -#### Enhancements[​](#enhancements-1) +#### Enhancements[​](#enhancements-2) - To support a minimal footprint and simplify resource management, the API server component and its associated resources have been moved from the `tigera-system` namespace to the `calico-system` namespace. @@ -62743,7 +62974,7 @@ For more information, see [Webhooks for security event alerts](https://docs.tige You can now use the web console to view details of the default rule set used by Web Application Firewall. From the **Web Application Firewall** page, click the **Rulesets** tab to open a list of default rules. -#### Enhancements[​](#enhancements-2) +#### Enhancements[​](#enhancements-3) - Added web console support for `AdminNetworkPolicy` and `BaseAdminNetworkPolicy` tiers (view-only). - Performance and user experience improvements to custom dashboards. 
@@ -62865,7 +63096,7 @@ Calico Cloud now extends its IPAM capabilities to support service LoadBalancer I For more information, see [LoadBalancer IP address management](https://docs.tigera.io/calico-cloud/networking/ipam/service-loadbalancer). -#### Enhancements[​](#enhancements-3) +#### Enhancements[​](#enhancements-4) - We improved how the web console performs in cases where there are a large number of policy tiers for managed clusters. To see these optimizations, make sure your managed clusters are running Calico Cloud 21.0.0 or later. @@ -62966,7 +63197,7 @@ Calico Cloud includes improvements so that early network configuration will be s For more information see [Deploy a dual ToR cluster](https://docs.tigera.io/calico-cloud/networking/configuring/dual-tor). -#### Enhancements[​](#enhancements-4) +#### Enhancements[​](#enhancements-5) - Guardian will respect HTTP proxy environment variables when set on the deployment by mutating webhook configurations. - Enhanced filtering options in the endpoints page of the web console. @@ -62988,7 +63219,7 @@ For more information see [Deploy a dual ToR cluster](https://docs.tigera.io/cali In this release, you can more easily manage your Image Assurance scan results by deleting results you don't need. On the **All Scan Results** page, select the checkbox next to result item, and then click **Actions > Delete**. You can also select multiple results and delete them as a bulk action. -#### Enhancements[​](#enhancements-5) +#### Enhancements[​](#enhancements-6) - Various detector improvements, including better handling of historical data and a detector export function. - Improved webhooks with ability to send global alerts. @@ -63018,7 +63249,7 @@ For more information, see [Exclude a process from Security Events alerts](https: Image Assurance scans results now include information using the [Exploit Prediction Scoring System (EPSS)](https://www.first.org/epss/). 
EPSS scores help you determine the likelihood that a given vulnerability will be exploited in the near future. Being able to view this information and filter scan results by EPSS score can help you judge the risk of vulnerabilities and prioritize your remediation efforts. -#### Enhancements[​](#enhancements-6) +#### Enhancements[​](#enhancements-7) - Functional and performance improvements to Image Assurance scan results filtering. @@ -63315,7 +63546,7 @@ We've added a new default mode for WAF that is monitor/event only. This allows o For more information, see [Web application firewall](https://docs.tigera.io/calico-cloud/threat/web-application-firewall). -#### Enhancements[​](#enhancements-7) +#### Enhancements[​](#enhancements-8) - You can now remove disconnected clusters from the list of managed clusters. See [Cluster management](https://docs.tigera.io/calico-cloud/operations/cluster-management#remove-a-cluster). diff --git a/static/calico-cloud/llms.txt b/static/calico-cloud/llms.txt index ebf4c20b64..056dbc8fe5 100644 --- a/static/calico-cloud/llms.txt +++ b/static/calico-cloud/llms.txt @@ -9,27 +9,27 @@ ## Calico Cloud Free Tier -- [Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/): Placeholder description. -- [Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/overview): Overview of Calico Cloud Free Tier -- [Calico Cloud Free Tier quickstart guide](https://docs.tigera.io/calico-cloud/free/quickstart): Quickstart guide for Calico Cloud Free Tier. -- [Connect a cluster to Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/connect-cluster-free): Securely connect your cluster to Calico Cloud Free Tier to access centralized network observability for your Kubernetes deployment. 
-- [Remove a cluster from Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/disconnect-cluster-free): Disconnect and remove your cluster from Calico Cloud Free Tier +- [Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/): Connect a Kubernetes cluster to Calico Cloud Free Tier — a no-cost path to centralized network observability without a paid Calico Cloud subscription. +- [Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/overview): What Calico Cloud Free Tier includes and excludes versus paid Calico Cloud — limits, supported platforms, and the upgrade path to a paid subscription. +- [Calico Cloud Free Tier quickstart guide](https://docs.tigera.io/calico-cloud/free/quickstart): Quickstart that connects a Kubernetes cluster to Calico Cloud Free Tier for centralized network observability — no payment or trial required. +- [Connect a cluster to Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/connect-cluster-free): Connect a Kubernetes cluster to Calico Cloud Free Tier so it reports network observability data to a Tigera-managed dashboard. +- [Remove a cluster from Calico Cloud Free Tier](https://docs.tigera.io/calico-cloud/free/disconnect-cluster-free): Disconnect a Kubernetes cluster from Calico Cloud Free Tier and remove the Calico Cloud components it installed. ## Install and upgrade -- [Install and upgrade](https://docs.tigera.io/calico-cloud/get-started/): Steps to connect clusters to Calico Cloud and upgrade. -- [Calico Cloud architecture](https://docs.tigera.io/calico-cloud/get-started/cc-arch-diagram): Understand the main components of Calico Cloud. -- [What happens when you connect a cluster to Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/connect-cluster): Get answers to your questions about connecting to Calico Cloud. -- [System requirements](https://docs.tigera.io/calico-cloud/get-started/system-requirements): Review cluster requirements to connect to Calico Cloud. 
-- [Prepare your cluster for Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/prepare-cluster): Prepare your cluster to install Calico Cloud. -- [Connect a cluster to Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/install-cluster): Steps to connect your cluster to Calico Cloud. -- [Connect a cluster to Calico Cloud using a private registry](https://docs.tigera.io/calico-cloud/get-started/install-private-registry): Steps to connect your cluster to Calico Cloud. -- [Install Calico Cloud as part of an automated workflow](https://docs.tigera.io/calico-cloud/get-started/install-automated): Install Calico Cloud as part of an automated workflow. -- [Set up a private registry](https://docs.tigera.io/calico-cloud/get-started/setup-private-registry): Add images to a private registry for installing Calico Cloud on a cluster. -- [Limitations and known issues for Windows nodes](https://docs.tigera.io/calico-cloud/get-started/windows-limitations): Review limitations before starting installation. -- [Troubleshooting checklist](https://docs.tigera.io/calico-cloud/get-started/checklist): Review this checklist before opening a Support ticket. -- [Tigera Operator troubleshooting checklist](https://docs.tigera.io/calico-cloud/get-started/operator-checklist): Additional troubleshooting for the Tigera Operator. -- [Upgrade Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/upgrade-cluster): Steps to upgrade to the latest version of Calico Cloud. +- [Install and upgrade](https://docs.tigera.io/calico-cloud/get-started/): Connect Kubernetes clusters to Calico Cloud and upgrade them — the entry point for the Calico Cloud onboarding flow. +- [Calico Cloud architecture](https://docs.tigera.io/calico-cloud/get-started/cc-arch-diagram): Architecture overview of Calico Cloud — components that run in the connected cluster and the SaaS-side services they communicate with. 
+- [What happens when you connect a cluster to Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/connect-cluster): What happens when you connect a Kubernetes cluster to Calico Cloud — what is installed, what data leaves the cluster, and what changes in the cluster. +- [System requirements](https://docs.tigera.io/calico-cloud/get-started/system-requirements): Cluster, platform, and version requirements a Kubernetes cluster must meet before it can connect to Calico Cloud. +- [Prepare your cluster for Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/prepare-cluster): Prepare a Kubernetes cluster for connection to Calico Cloud — pre-flight checks, RBAC, and image-pull configuration. +- [Connect a cluster to Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/install-cluster): Connect a Kubernetes cluster to Calico Cloud using the standard install command from the management UI. +- [Connect a cluster to Calico Cloud using a private registry](https://docs.tigera.io/calico-cloud/get-started/install-private-registry): Connect a Kubernetes cluster to Calico Cloud when its container images must be pulled from a private registry. +- [Install Calico Cloud as part of an automated workflow](https://docs.tigera.io/calico-cloud/get-started/install-automated): Connect a Kubernetes cluster to Calico Cloud as part of an automated CI or provisioning workflow rather than the interactive UI flow. +- [Set up a private registry](https://docs.tigera.io/calico-cloud/get-started/setup-private-registry): Mirror Calico Cloud container images into a private registry so air-gapped clusters can install without reaching the public registry. +- [Limitations and known issues for Windows nodes](https://docs.tigera.io/calico-cloud/get-started/windows-limitations): Known limitations for Calico Cloud on Windows worker nodes that you should review before planning a connection. 
+- [Troubleshooting checklist](https://docs.tigera.io/calico-cloud/get-started/checklist): Gather information and run pre-flight checks before opening a Calico Cloud support ticket so triage moves quickly. +- [Tigera Operator troubleshooting checklist](https://docs.tigera.io/calico-cloud/get-started/operator-checklist): Troubleshoot the Tigera Operator on Calico Cloud connected clusters when the standard support checklist is not enough. +- [Upgrade Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/upgrade-cluster): Upgrade a connected Calico Cloud cluster to the latest released version of the in-cluster components. ## Users @@ -68,49 +68,49 @@ ## Network policy -- [Network policy](https://docs.tigera.io/calico-cloud/network-policy/): Calico Cloud Network Policy and Calico Cloud Global Network Policy are the fundamental resources to secure workloads and hosts, and to adopt a zero trust security model. -- [Policy recommendations](https://docs.tigera.io/calico-cloud/network-policy/recommendations/): Enable policy recommendations for namespaces to improve your security posture. -- [Enable policy recommendations](https://docs.tigera.io/calico-cloud/network-policy/recommendations/policy-recommendations): Enable continuous policy recommendations to secure unprotected namespaces or workloads. -- [Policy recommendations tutorial](https://docs.tigera.io/calico-cloud/network-policy/recommendations/learn-about-policy-recommendations): Policy recommendations tutorial. -- [Policy best practices](https://docs.tigera.io/calico-cloud/network-policy/policy-best-practices): Learn policy best practices for security, scalability, and performance. -- [Policy tiers](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/): Learn how policy tiers allow diverse teams to securely manage Kubernetes policy. 
-- [Get started with policy tiers](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/tiered-policy): Understand how tiered policy works and supports microsegmentation. -- [Network policy tutorial](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/policy-tutorial-ui): Covers the basics of Calico Cloud network policy. -- [Change allow-tigera tier behavior](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/allow-tigera): Understand how to change the behavior of the allow-tigera tier. -- [Configure RBAC for tiered policies](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/rbac-tiered-policies): Configure RBAC to control access to policies and tiers. -- [Get started with network sets](https://docs.tigera.io/calico-cloud/network-policy/networksets): Learn the power of network sets and why you should create them. -- [Global default deny policy best practices](https://docs.tigera.io/calico-cloud/network-policy/default-deny): Implement a global default deny policy in the default tier to block unwanted traffic. -- [Stage, preview impacts, and enforce policy](https://docs.tigera.io/calico-cloud/network-policy/staged-network-policies): Stage and preview policies to observe traffic implications before enforcing them. -- [Troubleshoot policies](https://docs.tigera.io/calico-cloud/network-policy/policy-troubleshooting): Common policy implementation problems. -- [Calico Cloud network policy for beginners](https://docs.tigera.io/calico-cloud/network-policy/beginners/): Learn how to create your first Calico Cloud network policy. -- [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico-cloud/network-policy/beginners/kubernetes-default-deny): Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined. 
-- [Get started with Calico network policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-network-policy): Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy. -- [Calico Cloud automatic labels](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-labels): Calico Cloud automatic labels for use with resources. -- [Calico Cloud for Kubernetes demo](https://docs.tigera.io/calico-cloud/network-policy/beginners/simple-policy-cnx): Learn the extra features for Calico Cloud that make it so important for production environments. -- [Policy rules](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/): Control traffic to/from endpoints using Calico network policy rules. -- [Basic rules](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/policy-rules-overview): Define network connectivity for Calico endpoints using policy rules and label selectors. -- [Use namespace rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/namespace-policy): Use namespaces and namespace selectors in Calico network policy to group or separate resources. Use network policies to allow or deny traffic to/from pods that belong to specific namespaces. -- [Use service accounts rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-accounts): Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams. -- [Use service rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-policy): Use Kubernetes Service names in policy rules. 
-- [Use external IPs or networks rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/external-ips-policy): Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets. -- [Use ICMP/ping rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/icmp-ping): Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints. -- [Policy for Kubernetes services](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/): Apply Calico policy to Kubernetes node ports, and to services that are exposed externally as cluster IPs. -- [Apply Calico Cloud policy to Kubernetes node ports](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/kubernetes-node-ports): Restrict access to Kubernetes node ports using Calico Cloud global network policy. Follow the steps to secure the host, the node ports, and the cluster. -- [Apply Calico Cloud policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/services-cluster-ips): Expose Kubernetes service cluster IPs over BGP using Calico Cloud, and restrict who can access them using Calico Cloud network policy. -- [DNS policy](https://docs.tigera.io/calico-cloud/network-policy/domain-based-policy): Use domain names to allow traffic to destinations outside of a cluster by their DNS names instead of by their IP addresses. -- [Application layer policies to control ingress traffic](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/): Use application layer policies to restrict ingress traffic based on HTTP attributes. 
-- [Enable and enforce application layer policies](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp): Enforce application layer policies in your cluster to configure access controls based on L7 attributes. -- [Application layer policy tutorial](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp-tutorial): Learn how to apply ALP to your workloads and control ingress traffic. -- [Policy for firewalls](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/): Use Calico Cloud policy with existing firewalls. -- [Fortinet firewall integrations](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/): Calico Cloud Fortinet firewall integrations. -- [Determine the best Calico Cloud/Fortinet solution](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/overview): Learn how to integrate Kubernetes clusters with existing Fortinet firewall workflows using Calico Cloud. -- [Extend Kubernetes to Fortinet firewall devices](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/firewall-integration): Enable FortiGate firewalls to control traffic from Kubernetes workloads. -- [Extend FortiManager firewall policies to Kubernetes](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration): Extend FortiManager firewall policies to Kubernetes with Calico Cloud -- [Protect Kubernetes nodes](https://docs.tigera.io/calico-cloud/network-policy/hosts/kubernetes-nodes): Protect Kubernetes nodes with host endpoints managed by Calico Cloud. -- [Apply policy to forwarded traffic](https://docs.tigera.io/calico-cloud/network-policy/hosts/host-forwarded-traffic): Apply Calico Cloud network policy to traffic being forward by hosts acting as routers or NAT gateways. 
-- [Policy for extreme traffic](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/): Use Calico network policy early in the Linux packet processing pipeline to handle extreme traffic scenarios. -- [Enable extreme high-connection workloads](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/high-connection-workloads): Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience extremely large number of connections. -- [Defend against DoS attacks](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/defend-dos-attack): Define DoS mitigation rules in Calico Cloud policy to quickly drop connections when under attack. Learn how rules use eBPF and XDP, including hardware offload when available. +- [Network policy](https://docs.tigera.io/calico-cloud/network-policy/): Secure Kubernetes workloads and hosts in connected clusters with Calico Cloud network policy — managed enforcement, tiers, recommendations, and observability. +- [Policy recommendations](https://docs.tigera.io/calico-cloud/network-policy/recommendations/): Use Calico Cloud policy recommendations to generate baseline network policy for unprotected namespaces from observed flow logs in connected clusters. +- [Enable policy recommendations](https://docs.tigera.io/calico-cloud/network-policy/recommendations/policy-recommendations): Run continuous Calico Cloud policy recommendations so unprotected namespaces and workloads pick up baseline policy automatically. +- [Policy recommendations tutorial](https://docs.tigera.io/calico-cloud/network-policy/recommendations/learn-about-policy-recommendations): Tutorial walkthrough of the Calico Cloud policy recommendations engine — what it generates, how to review it, and how to promote it to enforced. 
+- [Policy best practices](https://docs.tigera.io/calico-cloud/network-policy/policy-best-practices): Best practices for Calico Cloud policy across connected clusters — security posture, scalability with tiers, and performance tuning under load. +- [Policy tiers](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/): Use Calico Cloud policy tiers to let platform, security, and app teams author and order policy independently across connected clusters. +- [Get started with policy tiers](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/tiered-policy): How tiered policy works in Calico Cloud — evaluation order, pass actions, and using tiers to enforce microsegmentation across connected clusters. +- [Network policy tutorial](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/policy-tutorial-ui): Tutorial for the Calico Cloud policy management UI — author, order, and stage policies inside tiers from the web console. +- [Change allow-tigera tier behavior](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/allow-tigera): Customize the behavior of the allow-tigera tier that Calico Cloud installs by default to keep its own components reachable. +- [Configure RBAC for tiered policies](https://docs.tigera.io/calico-cloud/network-policy/policy-tiers/rbac-tiered-policies): Configure Kubernetes RBAC to control which users can edit Calico Cloud policies in each tier across connected clusters. +- [Get started with network sets](https://docs.tigera.io/calico-cloud/network-policy/networksets): Use Calico Cloud network sets to package frequently reused IP ranges or domains into named selectors that policies can reference across connected clusters. +- [Global default deny policy best practices](https://docs.tigera.io/calico-cloud/network-policy/default-deny): Deploy a global default-deny policy in the Calico Cloud default tier across connected clusters so unprotected workloads are blocked until policy is written. 
+- [Stage, preview impacts, and enforce policy](https://docs.tigera.io/calico-cloud/network-policy/staged-network-policies): Stage and preview Calico Cloud network policies from the web console to observe traffic impact across connected clusters before enforcing. +- [Troubleshoot policies](https://docs.tigera.io/calico-cloud/network-policy/policy-troubleshooting): Troubleshooting guide for Calico Cloud policy implementation problems in connected clusters — denied traffic, missing rules, and tier-evaluation surprises. +- [Calico Cloud network policy for beginners](https://docs.tigera.io/calico-cloud/network-policy/beginners/): Beginner-friendly path for writing your first Calico Cloud network policies — a tour of the basic resource types and rule patterns. +- [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico-cloud/network-policy/beginners/kubernetes-default-deny): Apply a default-deny network policy in a Calico Cloud connected cluster so unprotected pods are denied traffic until explicit policy is written. +- [Get started with Calico network policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-network-policy): Write your first Calico Cloud NetworkPolicy — sample policies that exercise the rich rule features beyond Kubernetes NetworkPolicy. +- [Calico Cloud automatic labels](https://docs.tigera.io/calico-cloud/network-policy/beginners/calico-labels): Reference list of automatic labels Calico Cloud attaches to resources, useful as selectors in policy rules. +- [Calico Cloud for Kubernetes demo](https://docs.tigera.io/calico-cloud/network-policy/beginners/simple-policy-cnx): Tour of the additional features Calico Cloud adds to Kubernetes policy that make it suitable for production environments. +- [Policy rules](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/): Control traffic to and from endpoints using Calico Cloud network policy rules — selectors, actions, and egress/ingress directions. 
+- [Basic rules](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/policy-rules-overview): How to write policy rules in Calico Cloud — label selectors, source and destination match criteria, and rule actions. +- [Use namespace rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/namespace-policy): Group or separate workloads in Calico Cloud policy using namespaces and namespace selectors so policies apply only to specified namespaces. +- [Use service accounts rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-accounts): Match on Kubernetes service accounts in Calico Cloud policy rules to validate workload identity and apply RBAC-controlled rules. +- [Use service rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/service-policy): Match on Kubernetes Service names in Calico Cloud policy rules instead of specific pod selectors. +- [Use external IPs or networks rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/external-ips-policy): Restrict egress and ingress to specific IP ranges in Calico Cloud policy, either inline or via reusable network sets. +- [Use ICMP/ping rules in policy](https://docs.tigera.io/calico-cloud/network-policy/beginners/policy-rules/icmp-ping): Allow or deny ICMP and ping traffic for Calico Cloud workloads and host endpoints using policy rules. +- [Policy for Kubernetes services](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/): Apply Calico Cloud policy to Kubernetes Services — node ports, ClusterIPs, and externally exposed services. +- [Apply Calico Cloud policy to Kubernetes node ports](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/kubernetes-node-ports): Restrict access to Kubernetes NodePort services using a Calico Cloud GlobalNetworkPolicy at the host endpoint. 
+- [Apply Calico Cloud policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico-cloud/network-policy/beginners/services/services-cluster-ips): Expose Kubernetes Service ClusterIPs over BGP using Calico Cloud and restrict who can reach them with network policy. +- [DNS policy](https://docs.tigera.io/calico-cloud/network-policy/domain-based-policy): Allow traffic to external destinations by DNS name using Calico Cloud domain-based policy rules — without maintaining static IP lists. +- [Application layer policies to control ingress traffic](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/): Restrict ingress traffic to Calico Cloud workloads by HTTP method, path, or other Layer-7 attributes using application-layer policy. +- [Enable and enforce application layer policies](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp): Configure access controls based on Layer-7 attributes by enforcing Calico Cloud application-layer policy in connected clusters. +- [Application layer policy tutorial](https://docs.tigera.io/calico-cloud/network-policy/application-layer-policies/alp-tutorial): Step-by-step tutorial for applying Calico Cloud application-layer policy to workloads in a connected cluster — control ingress traffic by HTTP attributes. +- [Policy for firewalls](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/): Integrate Calico Cloud policy with existing perimeter firewalls — extend rule scope from Kubernetes workloads out to the network edge. +- [Fortinet firewall integrations](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/): Calico Cloud integrations with Fortinet firewalls — FortiGate for traffic enforcement and FortiManager for policy management. 
+- [Determine the best Calico Cloud/Fortinet solution](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/overview): Integrate Kubernetes clusters with existing Fortinet firewall workflows using Calico Cloud — architecture, components, and what each side enforces. +- [Extend Kubernetes to Fortinet firewall devices](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/firewall-integration): Use a FortiGate firewall to control egress traffic from Kubernetes workloads in a Calico Cloud connected cluster. +- [Extend FortiManager firewall policies to Kubernetes](https://docs.tigera.io/calico-cloud/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration): Extend FortiManager firewall policies into Kubernetes workloads in a Calico Cloud connected cluster. +- [Protect Kubernetes nodes](https://docs.tigera.io/calico-cloud/network-policy/hosts/kubernetes-nodes): Protect Kubernetes node interfaces with Calico Cloud host endpoints to extend network policy to the node itself. +- [Apply policy to forwarded traffic](https://docs.tigera.io/calico-cloud/network-policy/hosts/host-forwarded-traffic): Apply Calico Cloud network policy to traffic forwarded through hosts acting as routers or NAT gateways. +- [Policy for extreme traffic](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/): Apply Calico Cloud network policy early in the Linux packet-processing pipeline to handle DoS, high-connection, and other extreme traffic scenarios. +- [Enable extreme high-connection workloads](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/high-connection-workloads): Bypass Linux conntrack with a Calico Cloud policy rule for workloads that handle an extreme number of concurrent connections. 
+- [Defend against DoS attacks](https://docs.tigera.io/calico-cloud/network-policy/extreme-traffic/defend-dos-attack): Define DoS mitigation rules in Calico Cloud policy that drop connections at the eBPF or XDP layer, with hardware offload when available.

## Observability

@@ -169,6 +169,8 @@

## Compliance and security

- [Compliance and security](https://docs.tigera.io/calico-cloud/compliance/): Get reports on Kubernetes workloads and environments for regulatory compliance.
+- [Istio Ambient Mode](https://docs.tigera.io/calico-cloud/compliance/istio/about-istio-ambient): An overview of Calico's bundled version of Istio Ambient Mode.
+- [Deploy Istio Ambient Mode on your cluster](https://docs.tigera.io/calico-cloud/compliance/istio/deploy-istio-ambient): Deploy Calico's bundled version of Istio in ambient mode on your cluster.
- [Enable compliance reports](https://docs.tigera.io/calico-cloud/compliance/enable-compliance): Enable compliance reports to configure reports to assess compliance for all assets in a Kubernetes cluster.
- [Schedule and run compliance reports](https://docs.tigera.io/calico-cloud/compliance/overview): Get the reports for regulatory compliance on Kubernetes workloads and environments.
- [Configure CIS benchmark reports](https://docs.tigera.io/calico-cloud/compliance/compliance-reports-cis): Configure reports to assess compliance for all assets in a Kubernetes cluster.
diff --git a/static/calico-enterprise/llms-full.txt b/static/calico-enterprise/llms-full.txt index 332ea2b771..d155f4ef26 100644 --- a/static/calico-enterprise/llms-full.txt +++ b/static/calico-enterprise/llms-full.txt @@ -416,157 +416,157 @@ Requirements and guides for installing Calico Enterprise on Kubernetes clusters ##### [Quickstart for Calico Enterprise on Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart) -[Install Calico Enterprise on a single-host Kubernetes cluster for testing or development.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart) +[Stand up Calico Enterprise on a single-host Kubernetes cluster in about an hour for testing, demos, or development — not intended for production.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart) ##### [Support and compatibility](https://docs.tigera.io/calico-enterprise/latest/getting-started/compatibility) -[Lists versions of Calico Enterprise and Kubernetes for each platform.](https://docs.tigera.io/calico-enterprise/latest/getting-started/compatibility) +[Supported combinations of Calico Enterprise, Kubernetes, OpenShift, and host platforms for each Calico Enterprise release.](https://docs.tigera.io/calico-enterprise/latest/getting-started/compatibility) ## Installing[​](#installing) ##### [Quickstart for Calico Enterprise on Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart) -[Install Calico Enterprise on a single-host Kubernetes cluster for testing or development.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart) +[Stand up Calico Enterprise on a single-host Kubernetes cluster in about an hour for testing, demos, or development — not intended for 
production.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart)

##### [Options for installing Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/options-install)

-[Learn about API-driven installation and how to customize your installation configuration.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/options-install)
+[Customize a Calico Enterprise installation by editing the Installation resource — IP pools, MTU, registries, BGP, and operator behavior.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/options-install)

##### [Standard](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/generic-install)

-[Install Calico Enterprise on a kubeadm-provisioned Kubernetes cluster for on-premises deployments.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/generic-install)
+[Install Calico Enterprise on a kubeadm-provisioned Kubernetes cluster running on on-premises hardware or VMs.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/generic-install)

##### [Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm)

-[Install Calico Enterprise using Helm application package manager.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm)
+[Install Calico Enterprise on a Kubernetes cluster using the Helm 3 package manager.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm)

##### [System requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/requirements)

-[Review requirements for using OpenShift with Calico
Enterprise.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/requirements) +[Cluster, OpenShift, and host OS requirements you must meet before installing Calico Enterprise on an OpenShift 4 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/requirements) ##### [Install Calico Enterprise on OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/installation) -[Install Calico Enterprise on an OpenShift 4 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/installation) +[Install Calico Enterprise on a self-managed OpenShift 4 cluster using the Tigera Operator.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/installation) ##### [Charmed Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/charmed-k8s) -[Install Calico Enterprise on a Charmed Kubernetes cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/charmed-k8s) +[Install Calico Enterprise on a Canonical Charmed Kubernetes cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/charmed-k8s) ##### [Microsoft Azure Kubernetes Service (AKS)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aks) -[Install Calico Enterprise for an AKS cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aks) +[Install Calico Enterprise on an Azure Kubernetes Service (AKS) cluster, including the steps that differ from a self-managed install.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aks) ##### [Amazon Elastic Kubernetes Service (EKS)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/eks) -[Enable Calico network policy in 
EKS.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/eks) +[Install the full Calico Enterprise stack — including observability, threat defense, and tiered policy — on an Amazon EKS cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/eks) ##### [Google Kubernetes Engine (GKE)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/gke) -[Enable Calico network policy in GKE.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/gke) +[Install the full Calico Enterprise stack — including observability, threat defense, and tiered policy — on a Google Kubernetes Engine (GKE) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/gke) ##### [kOps on AWS](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aws) -[Install Calico Enterprise with a self-managed Kubernetes cluster using kOps on AWS.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aws) +[Install Calico Enterprise on a self-managed Kubernetes cluster provisioned with kOps on Amazon Web Services.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aws) ##### [Mirantis Kubernetes Engine (MKE 3)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/docker-enterprise) -[Install Calico Enterprise on an MKE 3 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/docker-enterprise) +[Install Calico Enterprise on a Mirantis Kubernetes Engine (MKE) 3 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/docker-enterprise) ##### [Rancher Kubernetes Engine (RKE)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher) -[Install Calico Enterprise on 
RKE.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher) +[Install Calico Enterprise on a Rancher Kubernetes Engine (RKE) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher) ##### [RKE2](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rke2) -[Install Calico Enterprise on an RKE2 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rke2) +[Install Calico Enterprise on an RKE2 cluster using the standard command-line installer.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rke2) ##### [Rancher UI](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher-ui) -[Install Calico Enterprise on a RKE2 cluster using the Rancher UI.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher-ui) +[Install Calico Enterprise on an RKE2 cluster from the Rancher UI rather than the command line.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher-ui) ##### [Tanzu Kubernetes Grid (TKG)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/tkg) -[Install Calico Enterprise on Tanzu Kubernetes Grid.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/tkg) +[Install Calico Enterprise on a VMware Tanzu Kubernetes Grid (TKG) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/tkg) ## Installing from a private registry[​](#installing-from-a-private-registry) ##### [Install from a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular) -[Install and configure Calico Enterprise in a private 
registry.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular) +[Install Calico Enterprise from a private container registry using the standard image paths.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular) ##### [Install from an image path in a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-image-path) -[Install and configure Calico Enterprise using an image path in a private registry.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-image-path) +[Install Calico Enterprise from a private registry that uses a non-default image path or repository structure.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-image-path) ## Installing on Windows[​](#installing-on-windows) ##### [Limitations and known issues](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/limitations) -[Review limitations before starting installation.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/limitations) +[Known limitations of Calico Enterprise for Windows that you should review before planning an installation.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/limitations) ##### [Requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/requirements) -[Review requirements for Calico Enterprise for Windows.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/requirements) +[Cluster and Windows host requirements you must meet before installing Calico Enterprise for 
Windows.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/requirements)

##### [Install using Operator](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/operator)

-[Install Calico Enterprise for Windows on a Kubernetes cluster for testing or development.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/operator)

+[Install Calico Enterprise for Windows on a Kubernetes cluster using the operator, for testing or development.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/operator)

##### [Install Calico Enterprise for Windows on RKE](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/rancher)

-[Install Calico Enterprise for Windows on RKE.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/rancher)

+[Install Calico Enterprise for Windows on a Rancher Kubernetes Engine (RKE) cluster with Windows worker nodes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/rancher)

##### [Basic policy demo](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/demo)

-[An interactive demo to show how to apply basic network policy to pods in a Calico Enterprise for Windows cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/demo)

+[Interactive demo that applies basic Calico Enterprise network policy to pods running on a Windows node.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/demo)

##### [Configure flow logs for workloads](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/flowlogs)

-[Configure flow logs for Calico Enterprise for Windows workloads.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/flowlogs)

+[Configure flow logs for Calico Enterprise for Windows workloads so traffic activity is captured for observability and forensics.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/flowlogs)

##### [Configure DNS policy for workloads](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/dnspolicy)

-[Configure DNS policy for Calico Enterprise for Windows workloads.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/dnspolicy)

+[Configure DNS policy for Calico Enterprise for Windows workloads to control egress to external services by hostname.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/dnspolicy)

##### [Troubleshoot Calico Enterprise for Windows](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/troubleshoot)

-[Help for troubleshooting Calico Enterprise for Windows issues.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/troubleshoot)

+[Troubleshooting guide for Calico Enterprise for Windows clusters — common issues, diagnostic steps, and where to look for logs.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/troubleshoot)

## Upgrading[​](#upgrading)

##### [Upgrade Calico Enterprise installed with Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm)

-[Upgrade to a newer version of Calico Enterprise installed with Helm.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm)

+[Upgrade a Helm-installed Calico Enterprise cluster on Kubernetes to a newer version.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm)

##### [Upgrade Calico Enterprise installed with the operator](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator)

-[Upgrading from an earlier release of Calico Enterprise with the operator.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator)

+[Upgrade an operator-installed Calico Enterprise cluster on Kubernetes to a newer version.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator)

##### [Upgrade Calico Enterprise installed with OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/openshift-upgrade)

-[Upgrade to a newer version of Calico Enterprise installed with OpenShift.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/openshift-upgrade)

+[Upgrade an existing Calico Enterprise installation on an OpenShift 4 cluster to a newer version.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/openshift-upgrade)

##### [Upgrade from Calico to Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/standard)

-[Steps to upgrade from open source Calico to Calico Enterprise.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/standard)

+[Upgrade from an operator-installed Calico Open Source cluster to Calico Enterprise on Kubernetes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/standard)

##### [Upgrade Calico to Calico Enterprise installed with Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/helm)

-[Upgrade to Calico Enterprise from Calico installed with Helm.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/helm)

+[Upgrade from a Helm-installed Calico Open Source cluster to Calico Enterprise on Kubernetes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/helm)

##### [Upgrade from Calico to Calico Enterprise on OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee-openshift)

-[Steps to upgrade from open source Calico to Calico Enterprise on OpenShift.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee-openshift)

+[Upgrade from Calico Open Source to Calico Enterprise on an OpenShift 4 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee-openshift)

##### [Install a patch release](https://docs.tigera.io/calico-enterprise/latest/getting-started/manifest-archive)

-[Install an older patch release of Calico Enterprise.](https://docs.tigera.io/calico-enterprise/latest/getting-started/manifest-archive)

+[Install an older patch release of Calico Enterprise from the manifest archive when an upgrade to the latest is not yet possible.](https://docs.tigera.io/calico-enterprise/latest/getting-started/manifest-archive)

## Non-cluster hosts[​](#non-cluster-hosts)

##### [Install Calico on non-cluster hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/about)

-[Install Calico on non-cluster hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/about)

+[Install Calico Enterprise on non-cluster hosts and VMs to apply Calico network policy and capture flow logs for workloads running outside Kubernetes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/about)

### Support and compatibility

@@ -710,23 +710,23 @@ The following list shows the browsers supported by Calico Enterprise in this rel

## [📄️Microsoft Azure Kubernetes Service (AKS)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aks)

-[Install Calico Enterprise for an AKS cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aks)

+[Install Calico Enterprise on an Azure Kubernetes Service (AKS) cluster, including the steps that differ from a self-managed install.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aks)

## [📄️Amazon Elastic Kubernetes Service (EKS)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/eks)

-[Enable Calico network policy in EKS.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/eks)

+[Install the full Calico Enterprise stack — including observability, threat defense, and tiered policy — on an Amazon EKS cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/eks)

## [📄️Google Kubernetes Engine (GKE)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/gke)

-[Enable Calico network policy in GKE.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/gke)

+[Install the full Calico Enterprise stack — including observability, threat defense, and tiered policy — on a Google Kubernetes Engine (GKE) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/gke)

## [📄️kOps on AWS](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aws)

-[Install Calico Enterprise with a self-managed Kubernetes cluster using kOps on AWS.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aws)

+[Install Calico Enterprise on a self-managed Kubernetes cluster provisioned with kOps on Amazon Web Services.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aws)

## [📄️Mirantis Kubernetes Engine (MKE 3)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/docker-enterprise)

-[Install Calico Enterprise on an MKE 3 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/docker-enterprise)

+[Install Calico Enterprise on a Mirantis Kubernetes Engine (MKE) 3 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/docker-enterprise)

## [📄️Mirantis Kubernetes Engine 4k (MKE 4k)](https://docs.tigera.io/calico-enterprise/latest/installation/install-calico-enterprise-mke-4k)

@@ -734,23 +734,23 @@ The following list shows the browsers supported by Calico Enterprise in this rel

## [📄️Rancher Kubernetes Engine (RKE)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher)

-[Install Calico Enterprise on RKE.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher)

+[Install Calico Enterprise on a Rancher Kubernetes Engine (RKE) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher)

## [📄️RKE2](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rke2)

-[Install Calico Enterprise on an RKE2 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rke2)

+[Install Calico Enterprise on an RKE2 cluster using the standard command-line installer.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rke2)

## [📄️Rancher UI](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher-ui)

-[Install Calico Enterprise on a RKE2 cluster using the Rancher UI.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher-ui)

+[Install Calico Enterprise on an RKE2 cluster from the Rancher UI rather than the command line.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher-ui)

## [📄️Tanzu Kubernetes Grid (TKG)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/tkg)

-[Install Calico Enterprise on Tanzu Kubernetes Grid.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/tkg)

+[Install Calico Enterprise on a VMware Tanzu Kubernetes Grid (TKG) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/tkg)

## [📄️Charmed Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/charmed-k8s)

-[Install Calico Enterprise on a Charmed Kubernetes cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/charmed-k8s)

+[Install Calico Enterprise on a Canonical Charmed Kubernetes cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/charmed-k8s)

## [🗃Calico Enterprise for Windows](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/)

@@ -762,11 +762,11 @@ The following list shows the browsers supported by Calico Enterprise in this rel

## [📄️Get a license](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/calico-enterprise)

-[Get a license to install Calico Enterprise.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/calico-enterprise)

+[How to obtain a Calico Enterprise license file before starting an installation.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/calico-enterprise)

## [📄️System requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/requirements)

-[Review requirements to install Calico Enterprise networking and network policy.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/requirements)

+[Cluster, host, and platform requirements you must meet before installing Calico Enterprise networking and network policy.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/requirements)

### Kubernetes

@@ -774,19 +774,19 @@ The following list shows the browsers supported by Calico Enterprise in this rel

## [📄️Quickstart for Calico Enterprise on Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart)

-[Install Calico Enterprise on a single-host Kubernetes cluster for testing or development.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart)

+[Stand up Calico Enterprise on a single-host Kubernetes cluster in about an hour for testing, demos, or development — not intended for production.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart)

## [📄️Options for installing Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/options-install)

-[Learn about API-driven installation and how to customize your installation configuration.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/options-install)

+[Customize a Calico Enterprise installation by editing the Installation resource — IP pools, MTU, registries, BGP, and operator behavior.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/options-install)

## [📄️Standard](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/generic-install)

-[Install Calico Enterprise on a kubeadm-provisioned Kubernetes cluster for on-premises deployments.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/generic-install)

+[Install Calico Enterprise on a kubeadm-provisioned Kubernetes cluster running on-premises hardware or VMs.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/generic-install)

## [📄️Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm)

-[Install Calico Enterprise using Helm application package manager.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm)

+[Install Calico Enterprise on a Kubernetes cluster using the Helm 3 package manager.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm)

### Quickstart for Calico Enterprise on Kubernetes

@@ -870,9 +870,9 @@ A Linux host that meets the following requirements.

2. Install the Tigera Operator and custom resource definitions.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml

- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

3. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -880,7 +880,7 @@ A Linux host that meets the following requirements.
> **SECONDARY:** If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

4. Install your pull secret.

@@ -898,13 +898,13 @@ A Linux host that meets the following requirements.

5. Optional: Compliance and packet capture features are optional. To enable these features during installation, download and review the custom-resources.yaml file. Uncomment the necessary CRs and use this custom-resources.yaml for installation.

```bash
- curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

6. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

Monitor progress with the following command:

@@ -1139,11 +1139,11 @@ The geeky details of what you get:

1. Install the Tigera Operator and custom resource definitions.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
```

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -1161,7 +1161,7 @@ The geeky details of what you get:

> , your Prometheus operator must be v0.40.0 or higher.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. Install your pull secret.

@@ -1185,13 +1185,13 @@ The geeky details of what you get:

5. (Optional) Compliance and packet capture features are optional. To enable these features during installation, download and review the custom-resources.yaml file. Uncomment the necessary CRs and use this custom-resources.yaml for installation.

```bash
- curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

6. Install the Tigera custom resources. For more information on configuration options available, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

You can now monitor progress with the following command:

@@ -1265,7 +1265,7 @@ helm repo add tigera-ee https://downloads.tigera.io/ee/charts

helm repo update

-helm pull tigera-ee/tigera-operator --version v3.22.3
+helm pull tigera-ee/tigera-operator --version v3.22.4
```

### Prepare the Installation Configuration[​](#prepare-the-installation-configuration)

@@ -1321,13 +1321,13 @@ To install a standard Calico Enterprise cluster with Helm:

2. Optional: Compliance and packetcapture features are optional. To enable these features, review the `values.yaml` file and set the flag to `enabled: true`. In the next step, use this modified `values.yaml` for the Helm install.

```bash
- helm show values ./tigera-operator-v3.22.3-0.tgz >values.yaml
+ helm show values ./tigera-operator-v3.22.4-0.tgz >values.yaml
```

3. Install the Tigera Operator and custom resource definitions using the Helm 3 chart:

```bash
- helm install calico-enterprise tigera-operator-v3.22.3-0.tgz \
+ helm install calico-enterprise tigera-operator-v3.22.4-0.tgz \

--set-file imagePullSecrets.tigera-pull-secret=,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret= \

@@ -1339,7 +1339,7 @@ To install a standard Calico Enterprise cluster with Helm:

or if you created a `values.yaml` above:

```bash
- helm install calico-enterprise tigera-operator-v3.22.3-0.tgz -f values.yaml \
+ helm install calico-enterprise tigera-operator-v3.22.4-0.tgz -f values.yaml \

--set-file imagePullSecrets.tigera-pull-secret=,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret= \

@@ -1383,19 +1383,19 @@ To install a standard Calico Enterprise cluster with Helm:

## [📄️System requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/requirements)

-[Review requirements for using OpenShift with Calico Enterprise.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/requirements)

+[Cluster, OpenShift, and host OS requirements you must meet before installing Calico Enterprise on an OpenShift 4 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/requirements)

## [📄️Install Calico Enterprise on OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/installation)

-[Install Calico Enterprise on an OpenShift 4 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/installation)

+[Install Calico Enterprise on a self-managed OpenShift 4 cluster using the Tigera Operator.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/installation)

## [📄️Install Calico Enterprise on an OpenShift HCP cluster](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/hostedcontrolplanes)

-[Install Calico Enterprise on an OpenShift Hosted Control Planes (HCP) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/hostedcontrolplanes)

+[Install Calico Enterprise on an OpenShift Hosted Control Planes (HCP) cluster, where the control plane is managed and the data plane runs on user-owned nodes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/hostedcontrolplanes)

## [📄️Install Calico Enterprise on a Red Hat OpenShift on AWS (ROSA) cluster](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/rosa)

-[Install Calico Enterprise on a Red Hat OpenShift on AWS (ROSA) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/rosa)

+[Install Calico Enterprise on a Red Hat OpenShift Service on AWS (ROSA) cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/rosa)

### System requirements

@@ -1677,7 +1677,7 @@ Download the Calico Enterprise manifests for OpenShift and add t

```bash
mkdir calico

-wget -qO- https://downloads.tigera.io/ee/v3.22.3/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico
+wget -qO- https://downloads.tigera.io/ee/v3.22.4/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico

cp calico/* manifests/
```

@@ -1747,7 +1747,7 @@ oc create -f

Apply the custom resources for enterprise features.
```bash
-oc create -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-enterprise-resources.yaml
+oc create -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-enterprise-resources.yaml
```

Apply the Calico Enterprise manifests for the Prometheus operator.

@@ -1773,7 +1773,7 @@ Apply the Calico Enterprise manifests for the Prometheus operato

> that you manage yourself.

```bash
-oc create -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-prometheus-operator.yaml
+oc create -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-prometheus-operator.yaml
```

You can now monitor progress with the following command:

@@ -1787,7 +1787,7 @@ When it shows all components with status `Available`, proceed to the next step.

(Optional) Apply the full CRDs including descriptions.

```bash
-oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
```

## Next steps[​](#next-steps)

@@ -1931,7 +1931,7 @@ Download the Calico Enterprise manifests for OpenShift:

```bash
mkdir calico

-wget -qO- https://downloads.tigera.io/ee/v3.22.3/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico
+wget -qO- https://downloads.tigera.io/ee/v3.22.4/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico
```

### Add an image pull secret[​](#add-an-image-pull-secret)

@@ -2011,7 +2011,7 @@ oc create -f

Apply the custom resources for enterprise features.

```bash
-oc create -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-enterprise-resources.yaml
+oc create -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-enterprise-resources.yaml
```

Apply the Calico Enterprise manifests for the Prometheus operator.

@@ -2037,7 +2037,7 @@ Apply the Calico Enterprise manifests for the Prometheus operato

> that you manage yourself.
```bash
-oc create -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-prometheus-operator.yaml
+oc create -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-prometheus-operator.yaml
```

You can now monitor progress with the following command:

@@ -2051,7 +2051,7 @@ When it shows all components with status `Available`, proceed to the next step.

(Optional) Apply the full CRDs including descriptions.

```bash
-oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
```

## Next steps[​](#next-steps)

@@ -2154,7 +2154,7 @@ Download the Calico Enterprise manifests for OpenShift:

```bash
mkdir calico

-wget -qO- https://downloads.tigera.io/ee/v3.22.3/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico
+wget -qO- https://downloads.tigera.io/ee/v3.22.4/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico
```

### Add an image pull secret[​](#add-an-image-pull-secret)

@@ -2234,7 +2234,7 @@ oc create -f

Apply the custom resources for enterprise features.

```bash
-oc create -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-enterprise-resources.yaml
+oc create -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-enterprise-resources.yaml
```

Apply the Calico Enterprise manifests for the Prometheus operator.

@@ -2260,7 +2260,7 @@ Apply the Calico Enterprise manifests for the Prometheus operato

> that you manage yourself.
```bash -oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml +oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` ## Next steps[​](#next-steps) @@ -2373,11 +2373,11 @@ Install Calico Enterprise on an AKS managed Kubernetes cluster. 1. Install the Tigera Operator and custom resource definitions. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml ``` 2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics. @@ -2395,7 +2395,7 @@ Install Calico Enterprise on an AKS managed Kubernetes cluster. > , your Prometheus operator must be v0.40.0 or higher. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml ``` 3. Install your pull secret. @@ -2415,7 +2415,7 @@ Install Calico Enterprise on an AKS managed Kubernetes cluster. 5. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). 
```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/aks/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/aks/custom-resources.yaml
```

You can now monitor progress with the following command:

@@ -2433,11 +2433,11 @@ Wait until the `apiserver` shows a status of `Available`, then proceed toCalico Enterprise metrics.

@@ -2455,7 +2455,7 @@ Wait until the `apiserver` shows a status of `Available`, then proceed toCalico Enterprise metrics.

@@ -2579,7 +2579,7 @@ Install Calico Enterprise on an EKS managed Kubernetes cluster.

> , your Prometheus operator must be v0.40.0 or higher.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. Install your pull secret.

@@ -2599,7 +2599,7 @@ Install Calico Enterprise on an EKS managed Kubernetes cluster.

5. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/eks/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/eks/custom-resources.yaml
```

You can now monitor progress with the following command:

@@ -2639,11 +2639,11 @@ Before you get started, make sure you have downloaded and configured the

2. Install the Tigera Operator and custom resource definitions.
```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
```

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

3. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -2661,7 +2661,7 @@ Before you get started, make sure you have downloaded and configured the

> , your Prometheus operator must be v0.40.0 or higher.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

4. Install your pull secret.

@@ -2681,7 +2681,7 @@ Before you get started, make sure you have downloaded and configured the

6. To configure Calico Enterprise for use with the Calico CNI plugin, we must create an `Installation` resource that has `spec.cni.type: Calico`. Install the `custom-resources-calico-cni.yaml` manifest, which includes this configuration. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/eks/custom-resources-calico-cni.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/eks/custom-resources-calico-cni.yaml
```

7. Finally, add nodes to the cluster.

@@ -2779,11 +2779,11 @@ The geeky details of what you get:

1. Install the Tigera Operator and custom resource definitions.
```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
```

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -2801,7 +2801,7 @@ The geeky details of what you get:

> , your Prometheus operator must be v0.40.0 or higher.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. Install your pull secret.

@@ -2821,7 +2821,7 @@ The geeky details of what you get:

5. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

You can now monitor progress with the following command:

@@ -3080,9 +3080,9 @@ The geeky details of what you get:

3. Install the Tigera Operator and custom resource definitions.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml

- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

4. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -3090,7 +3090,7 @@ The geeky details of what you get:

> **SECONDARY:** If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

5. Install your pull secret.

@@ -3110,7 +3110,7 @@ The geeky details of what you get:

7. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

Monitor progress with the following command:

@@ -3251,9 +3251,9 @@ In a new terminal, install the Calico Enterprise CNI.

2. Install the Tigera Operator and custom resource definitions.

```bash
- kubectl apply --server-side -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl apply --server-side -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml

- kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

3. Install your pull secret.

@@ -3275,13 +3275,13 @@ In a new terminal, install the Calico Enterprise CNI.

5. Optional: Compliance and packet capture features are optional. To enable these features during installation, download and review the `custom-resources.yaml` file. Uncomment the necessary CRs and use this `custom-resources.yaml` for installation.

```bash
- curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

6. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

7. Restrict privileged container access in the `tigera-elasticsearch` namespace to only the necessary Tigera and Elasticsearch service accounts using an MKE admission policy annotation.

@@ -3383,9 +3383,9 @@ The geeky details of what you get:

2. Install the Tigera Operator and custom resource definitions.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml

- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

3. Install the Prometheus operator and related custom resource definitions. The Prometheus operator is used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -3393,7 +3393,7 @@ The geeky details of what you get:

> **SECONDARY:** If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

4. Install your pull secret.

@@ -3413,7 +3413,7 @@ The geeky details of what you get:

6. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

Monitor progress with the following command:

@@ -3506,9 +3506,9 @@ The geeky details of what you get:

2. Install the Tigera Operator and custom resource definitions.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml

- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

3. Install the Prometheus operator and related custom resource definitions. The Prometheus operator is used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -3516,7 +3516,7 @@ The geeky details of what you get:

> **SECONDARY:** If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

4. Install your pull secret.
@@ -3536,7 +3536,7 @@ The geeky details of what you get:

6. Install the Tigera custom resources. For more information on configuration options available, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/rancher/custom-resources-rke2.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/rancher/custom-resources-rke2.yaml
```

Monitor progress with the following command:

@@ -3771,11 +3771,11 @@ The geeky details of what you get:

1. Install the Tigera Operator and custom resource definitions.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
```

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -3793,7 +3793,7 @@ The geeky details of what you get:

> , your Prometheus operator must be v0.40.0 or higher.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. Install your pull secret.

@@ -3817,13 +3817,13 @@ The geeky details of what you get:

5. (Optional) Compliance and packet capture features are optional. To enable these features during installation, download and review the custom-resources.yaml file. Uncomment the necessary CRs and use this custom-resources.yaml for installation.
```bash
- curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

6. Install the Tigera custom resources. For more information on configuration options available, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

You can now monitor progress with the following command:

@@ -4438,11 +4438,11 @@ To create a Charmed Kubernetes cluster without a CNI, you can customize your dep

1. Install the Tigera Operator and custom resource definitions.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
```

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

@@ -4460,7 +4460,7 @@ To create a Charmed Kubernetes cluster without a CNI, you can customize your dep

> , your Prometheus operator must be v0.40.0 or higher.

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. Install your pull secret.

@@ -4484,13 +4484,13 @@ To create a Charmed Kubernetes cluster without a CNI, you can customize your dep

5. (Optional) Compliance and packet capture features are optional. To enable these features during installation, download and review the custom-resources.yaml file. Uncomment the necessary CRs and use this custom-resources.yaml for installation.

```bash
- curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

6. Install the Tigera custom resources. For more information on configuration options available, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```text
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

You can now monitor progress with the following command:

@@ -4535,35 +4535,35 @@ watch kubectl get tigerastatus

## [📄️Limitations and known issues](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/limitations)

-[Review limitations before starting installation.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/limitations)
+[Known limitations of Calico Enterprise for Windows that you should review before planning an installation.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/limitations)

## [📄️Requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/requirements)

-[Review requirements for Calico Enterprise for Windows.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/requirements)
+[Cluster and Windows host requirements you must meet before installing Calico Enterprise for Windows.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/requirements)

## [📄️Install using Operator](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/operator)

-[Install Calico Enterprise for Windows on a Kubernetes cluster for testing or development.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/operator)
+[Install Calico Enterprise for Windows on a Kubernetes cluster using the operator, for testing or development.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/operator)

## [📄️Install Calico Enterprise for Windows on RKE](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/rancher)

-[Install Calico Enterprise for Windows on RKE.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/rancher)
+[Install Calico Enterprise for Windows on a Rancher Kubernetes Engine (RKE) cluster with Windows worker nodes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/rancher)

## [📄️Basic policy demo](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/demo)

-[An interactive demo to show how to apply basic network policy to pods in a Calico Enterprise for Windows cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/demo)
+[Interactive demo that applies basic Calico Enterprise network policy to pods running on a Windows node.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/demo)

## [📄️Configure flow logs for workloads](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/flowlogs)

-[Configure flow logs for Calico Enterprise for Windows workloads.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/flowlogs)
+[Configure flow logs for Calico Enterprise for Windows workloads so traffic activity is captured for observability and forensics.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/flowlogs)

## [📄️Configure DNS policy for workloads](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/dnspolicy)

-[Configure DNS policy for Calico Enterprise for Windows workloads.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/dnspolicy)
+[Configure DNS policy for Calico Enterprise for Windows workloads to control egress to external services by hostname.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/dnspolicy)

## [📄️Troubleshoot Calico Enterprise for Windows](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/troubleshoot)

-[Help for troubleshooting Calico Enterprise for Windows issues.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/troubleshoot)
+[Troubleshooting guide for Calico Enterprise for Windows clusters — common issues, diagnostic steps, and where to look for logs.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/troubleshoot)

### Limitations and known issues

@@ -5299,15 +5299,15 @@ The following steps will outline the installation of Calico Enterprise networkin

1. Install the Tigera Operator and custom resource definitions.

```bash
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml

- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

2. Download the necessary Installation custom resources.
```bash
- wget https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml
+ wget https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml
```

3. Update the `calicoNetwork` options, ensuring that the correct pod CIDR is set. (Rancher uses `10.42.0.0/16` by default.) Below are sample installations for VXLAN and BGP networking using the default Rancher pod CIDR:

@@ -6559,11 +6559,11 @@ Check that `bpfEnabled=false` (or is not present at all in the felixconfiguratio

## [📄️Install from a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular)

-[Install and configure Calico Enterprise in a private registry.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular)
+[Install Calico Enterprise from a private container registry using the standard image paths.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular)

## [📄️Install from an image path in a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-image-path)

-[Install and configure Calico Enterprise using an image path in a private registry.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-image-path)
+[Install Calico Enterprise from a private registry that uses a non-default image path or repository structure.](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-image-path)

### Install from a private registry

@@ -6600,331 +6600,331 @@ In order to install images from your private registry, you must first pull the i

1. Use the following commands to pull the required Calico Enterprise images.
2.
```bash
- docker pull quay.io/tigera/operator:v1.40.9
+ docker pull quay.io/tigera/operator:v1.40.10

- docker pull quay.io/tigera/alertmanager:v3.22.3
+ docker pull quay.io/tigera/alertmanager:v3.22.4

- docker pull quay.io/tigera/calicoctl:v3.22.3
+ docker pull quay.io/tigera/calicoctl:v3.22.4

- docker pull quay.io/tigera/calicoq:v3.22.3
+ docker pull quay.io/tigera/calicoq:v3.22.4

- docker pull quay.io/tigera/apiserver:v3.22.3
+ docker pull quay.io/tigera/apiserver:v3.22.4

- docker pull quay.io/tigera/kube-controllers:v3.22.3
+ docker pull quay.io/tigera/kube-controllers:v3.22.4

- docker pull quay.io/tigera/manager:v3.22.3
+ docker pull quay.io/tigera/manager:v3.22.4

- docker pull quay.io/tigera/node:v3.22.3
+ docker pull quay.io/tigera/node:v3.22.4

- docker pull quay.io/tigera/queryserver:v3.22.3
+ docker pull quay.io/tigera/queryserver:v3.22.4

- docker pull quay.io/tigera/compliance-benchmarker:v3.22.3
+ docker pull quay.io/tigera/compliance-benchmarker:v3.22.4

- docker pull quay.io/tigera/compliance-controller:v3.22.3
+ docker pull quay.io/tigera/compliance-controller:v3.22.4

- docker pull quay.io/tigera/compliance-reporter:v3.22.3
+ docker pull quay.io/tigera/compliance-reporter:v3.22.4

- docker pull quay.io/tigera/compliance-server:v3.22.3
+ docker pull quay.io/tigera/compliance-server:v3.22.4

- docker pull quay.io/tigera/compliance-snapshotter:v3.22.3
+ docker pull quay.io/tigera/compliance-snapshotter:v3.22.4

- docker pull quay.io/tigera/csi:v3.22.3
+ docker pull quay.io/tigera/csi:v3.22.4

- docker pull quay.io/tigera/node-driver-registrar:v3.22.3
+ docker pull quay.io/tigera/node-driver-registrar:v3.22.4

- docker pull quay.io/tigera/deep-packet-inspection:v3.22.3
+ docker pull quay.io/tigera/deep-packet-inspection:v3.22.4

- docker pull quay.io/tigera/dex:v3.22.3
+ docker pull quay.io/tigera/dex:v3.22.4

- docker pull quay.io/tigera/dikastes:v3.22.3
+ docker pull quay.io/tigera/dikastes:v3.22.4

- docker pull quay.io/tigera/egress-gateway:v3.22.3
+ docker pull quay.io/tigera/egress-gateway:v3.22.4

- docker pull quay.io/tigera/intrusion-detection-job-installer:v3.22.3
+ docker pull quay.io/tigera/intrusion-detection-job-installer:v3.22.4

- docker pull quay.io/tigera/elasticsearch:v3.22.3
+ docker pull quay.io/tigera/elasticsearch:v3.22.4

- docker pull quay.io/tigera/elasticsearch-metrics:v3.22.3
+ docker pull quay.io/tigera/elasticsearch-metrics:v3.22.4

- docker pull quay.io/tigera/eck-operator:v3.22.3
+ docker pull quay.io/tigera/eck-operator:v3.22.4

- docker pull quay.io/tigera/envoy:v3.22.3
+ docker pull quay.io/tigera/envoy:v3.22.4

- docker pull quay.io/tigera/es-gateway:v3.22.3
+ docker pull quay.io/tigera/es-gateway:v3.22.4

- docker pull quay.io/tigera/firewall-integration:v3.22.3
+ docker pull quay.io/tigera/firewall-integration:v3.22.4

- docker pull quay.io/tigera/pod2daemon-flexvol:v3.22.3
+ docker pull quay.io/tigera/pod2daemon-flexvol:v3.22.4

- docker pull quay.io/tigera/fluentd:v3.22.3
+ docker pull quay.io/tigera/fluentd:v3.22.4

- docker pull quay.io/tigera/envoy-gateway:v3.22.3
+ docker pull quay.io/tigera/envoy-gateway:v3.22.4

- docker pull quay.io/tigera/envoy-proxy:v3.22.3
+ docker pull quay.io/tigera/envoy-proxy:v3.22.4

- docker pull quay.io/tigera/envoy-ratelimit:v3.22.3
+ docker pull quay.io/tigera/envoy-ratelimit:v3.22.4

- docker pull quay.io/tigera/guardian:v3.22.3
+ docker pull quay.io/tigera/guardian:v3.22.4

- docker pull quay.io/tigera/ingress-collector:v3.22.3
+ docker pull quay.io/tigera/ingress-collector:v3.22.4

- docker pull quay.io/tigera/intrusion-detection-controller:v3.22.3
+ docker pull quay.io/tigera/intrusion-detection-controller:v3.22.4

- docker pull quay.io/tigera/key-cert-provisioner:v3.22.3
+ docker pull quay.io/tigera/key-cert-provisioner:v3.22.4

- docker pull quay.io/tigera/kibana:v3.22.3
+ docker pull quay.io/tigera/kibana:v3.22.4

- docker pull quay.io/tigera/l7-admission-controller:v3.22.3
+ docker pull quay.io/tigera/l7-admission-controller:v3.22.4

- docker pull quay.io/tigera/l7-collector:v3.22.3
+ docker pull quay.io/tigera/l7-collector:v3.22.4

- docker pull quay.io/tigera/license-agent:v3.22.3
+ docker pull quay.io/tigera/license-agent:v3.22.4

- docker pull quay.io/tigera/linseed:v3.22.3
+ docker pull quay.io/tigera/linseed:v3.22.4

- docker pull quay.io/tigera/packetcapture:v3.22.3
+ docker pull quay.io/tigera/packetcapture:v3.22.4

- docker pull quay.io/tigera/policy-recommendation:v3.22.3
+ docker pull quay.io/tigera/policy-recommendation:v3.22.4

- docker pull quay.io/tigera/prometheus:v3.22.3
+ docker pull quay.io/tigera/prometheus:v3.22.4

- docker pull quay.io/tigera/prometheus-config-reloader:v3.22.3
+ docker pull quay.io/tigera/prometheus-config-reloader:v3.22.4

- docker pull quay.io/tigera/prometheus-operator:v3.22.3
+ docker pull quay.io/tigera/prometheus-operator:v3.22.4

- docker pull quay.io/tigera/cni:v3.22.3
+ docker pull quay.io/tigera/cni:v3.22.4

- docker pull quay.io/tigera/prometheus-service:v3.22.3
+ docker pull quay.io/tigera/prometheus-service:v3.22.4

- docker pull quay.io/tigera/typha:v3.22.3
+ docker pull quay.io/tigera/typha:v3.22.4

- docker pull quay.io/tigera/ui-apis:v3.22.3
+ docker pull quay.io/tigera/ui-apis:v3.22.4

- docker pull quay.io/tigera/voltron:v3.22.3
+ docker pull quay.io/tigera/voltron:v3.22.4

- docker pull quay.io/tigera/waf-http-filter:v3.22.3
+ docker pull quay.io/tigera/waf-http-filter:v3.22.4

- docker pull quay.io/tigera/webhooks-processor:v3.22.3
+ docker pull quay.io/tigera/webhooks-processor:v3.22.4
```

Retag the images with the name of your private registry `$PRIVATE_REGISTRY`.

```bash
- docker tag quay.io/tigera/operator:v1.40.9 $PRIVATE_REGISTRY/tigera/operator:v1.40.9
+ docker tag quay.io/tigera/operator:v1.40.10 $PRIVATE_REGISTRY/tigera/operator:v1.40.10

- docker tag quay.io/tigera/alertmanager:v3.22.3 $PRIVATE_REGISTRY/tigera/alertmanager:v3.22.3
+ docker tag quay.io/tigera/alertmanager:v3.22.4 $PRIVATE_REGISTRY/tigera/alertmanager:v3.22.4

- docker tag quay.io/tigera/calicoctl:v3.22.3 $PRIVATE_REGISTRY/tigera/calicoctl:v3.22.3
+ docker tag quay.io/tigera/calicoctl:v3.22.4 $PRIVATE_REGISTRY/tigera/calicoctl:v3.22.4

- docker tag quay.io/tigera/calicoq:v3.22.3 $PRIVATE_REGISTRY/tigera/calicoq:v3.22.3
+ docker tag quay.io/tigera/calicoq:v3.22.4 $PRIVATE_REGISTRY/tigera/calicoq:v3.22.4

- docker tag quay.io/tigera/apiserver:v3.22.3 $PRIVATE_REGISTRY/tigera/apiserver:v3.22.3
+ docker tag quay.io/tigera/apiserver:v3.22.4 $PRIVATE_REGISTRY/tigera/apiserver:v3.22.4

- docker tag quay.io/tigera/kube-controllers:v3.22.3 $PRIVATE_REGISTRY/tigera/kube-controllers:v3.22.3
+ docker tag quay.io/tigera/kube-controllers:v3.22.4 $PRIVATE_REGISTRY/tigera/kube-controllers:v3.22.4

- docker tag quay.io/tigera/manager:v3.22.3 $PRIVATE_REGISTRY/tigera/manager:v3.22.3
+ docker tag quay.io/tigera/manager:v3.22.4 $PRIVATE_REGISTRY/tigera/manager:v3.22.4

- docker tag quay.io/tigera/node:v3.22.3 $PRIVATE_REGISTRY/tigera/node:v3.22.3
+ docker tag quay.io/tigera/node:v3.22.4 $PRIVATE_REGISTRY/tigera/node:v3.22.4

- docker tag quay.io/tigera/queryserver:v3.22.3 $PRIVATE_REGISTRY/tigera/queryserver:v3.22.3
+ docker tag quay.io/tigera/queryserver:v3.22.4 $PRIVATE_REGISTRY/tigera/queryserver:v3.22.4

- docker tag quay.io/tigera/compliance-benchmarker:v3.22.3 $PRIVATE_REGISTRY/tigera/compliance-benchmarker:v3.22.3
+ docker tag quay.io/tigera/compliance-benchmarker:v3.22.4 $PRIVATE_REGISTRY/tigera/compliance-benchmarker:v3.22.4

- docker tag quay.io/tigera/compliance-controller:v3.22.3 $PRIVATE_REGISTRY/tigera/compliance-controller:v3.22.3
+ docker tag quay.io/tigera/compliance-controller:v3.22.4 $PRIVATE_REGISTRY/tigera/compliance-controller:v3.22.4

- docker tag quay.io/tigera/compliance-reporter:v3.22.3 $PRIVATE_REGISTRY/tigera/compliance-reporter:v3.22.3
+ docker tag quay.io/tigera/compliance-reporter:v3.22.4 $PRIVATE_REGISTRY/tigera/compliance-reporter:v3.22.4

- docker tag quay.io/tigera/compliance-server:v3.22.3 $PRIVATE_REGISTRY/tigera/compliance-server:v3.22.3
+ docker tag quay.io/tigera/compliance-server:v3.22.4 $PRIVATE_REGISTRY/tigera/compliance-server:v3.22.4

- docker tag quay.io/tigera/compliance-snapshotter:v3.22.3 $PRIVATE_REGISTRY/tigera/compliance-snapshotter:v3.22.3
+ docker tag quay.io/tigera/compliance-snapshotter:v3.22.4 $PRIVATE_REGISTRY/tigera/compliance-snapshotter:v3.22.4

- docker tag quay.io/tigera/csi:v3.22.3 $PRIVATE_REGISTRY/tigera/csi:v3.22.3
+ docker tag quay.io/tigera/csi:v3.22.4 $PRIVATE_REGISTRY/tigera/csi:v3.22.4

- docker tag quay.io/tigera/node-driver-registrar:v3.22.3 $PRIVATE_REGISTRY/tigera/node-driver-registrar:v3.22.3
+ docker tag quay.io/tigera/node-driver-registrar:v3.22.4 $PRIVATE_REGISTRY/tigera/node-driver-registrar:v3.22.4

- docker tag quay.io/tigera/deep-packet-inspection:v3.22.3 $PRIVATE_REGISTRY/tigera/deep-packet-inspection:v3.22.3
+ docker tag quay.io/tigera/deep-packet-inspection:v3.22.4 $PRIVATE_REGISTRY/tigera/deep-packet-inspection:v3.22.4

- docker tag quay.io/tigera/dex:v3.22.3 $PRIVATE_REGISTRY/tigera/dex:v3.22.3
+ docker tag quay.io/tigera/dex:v3.22.4 $PRIVATE_REGISTRY/tigera/dex:v3.22.4

- docker tag quay.io/tigera/dikastes:v3.22.3 $PRIVATE_REGISTRY/tigera/dikastes:v3.22.3
+ docker tag quay.io/tigera/dikastes:v3.22.4 $PRIVATE_REGISTRY/tigera/dikastes:v3.22.4

- docker tag quay.io/tigera/egress-gateway:v3.22.3 $PRIVATE_REGISTRY/tigera/egress-gateway:v3.22.3
+ docker tag quay.io/tigera/egress-gateway:v3.22.4 $PRIVATE_REGISTRY/tigera/egress-gateway:v3.22.4

- docker tag quay.io/tigera/intrusion-detection-job-installer:v3.22.3 $PRIVATE_REGISTRY/tigera/intrusion-detection-job-installer:v3.22.3
+ docker tag quay.io/tigera/intrusion-detection-job-installer:v3.22.4 $PRIVATE_REGISTRY/tigera/intrusion-detection-job-installer:v3.22.4

- docker tag quay.io/tigera/elasticsearch:v3.22.3 $PRIVATE_REGISTRY/tigera/elasticsearch:v3.22.3
+ docker tag quay.io/tigera/elasticsearch:v3.22.4 $PRIVATE_REGISTRY/tigera/elasticsearch:v3.22.4

- docker tag quay.io/tigera/elasticsearch-metrics:v3.22.3 $PRIVATE_REGISTRY/tigera/elasticsearch-metrics:v3.22.3
+ docker tag quay.io/tigera/elasticsearch-metrics:v3.22.4 $PRIVATE_REGISTRY/tigera/elasticsearch-metrics:v3.22.4

- docker tag quay.io/tigera/eck-operator:v3.22.3 $PRIVATE_REGISTRY/tigera/eck-operator:v3.22.3
+ docker tag quay.io/tigera/eck-operator:v3.22.4 $PRIVATE_REGISTRY/tigera/eck-operator:v3.22.4

- docker tag quay.io/tigera/envoy:v3.22.3 $PRIVATE_REGISTRY/tigera/envoy:v3.22.3
+ docker tag quay.io/tigera/envoy:v3.22.4 $PRIVATE_REGISTRY/tigera/envoy:v3.22.4

- docker tag quay.io/tigera/es-gateway:v3.22.3 $PRIVATE_REGISTRY/tigera/es-gateway:v3.22.3
+ docker tag quay.io/tigera/es-gateway:v3.22.4 $PRIVATE_REGISTRY/tigera/es-gateway:v3.22.4

- docker tag quay.io/tigera/firewall-integration:v3.22.3 $PRIVATE_REGISTRY/tigera/firewall-integration:v3.22.3
+ docker tag quay.io/tigera/firewall-integration:v3.22.4 $PRIVATE_REGISTRY/tigera/firewall-integration:v3.22.4

- docker tag quay.io/tigera/pod2daemon-flexvol:v3.22.3 $PRIVATE_REGISTRY/tigera/pod2daemon-flexvol:v3.22.3
+ docker tag quay.io/tigera/pod2daemon-flexvol:v3.22.4 $PRIVATE_REGISTRY/tigera/pod2daemon-flexvol:v3.22.4

- docker tag quay.io/tigera/fluentd:v3.22.3 $PRIVATE_REGISTRY/tigera/fluentd:v3.22.3
+ docker tag quay.io/tigera/fluentd:v3.22.4 $PRIVATE_REGISTRY/tigera/fluentd:v3.22.4

- docker tag quay.io/tigera/envoy-gateway:v3.22.3 $PRIVATE_REGISTRY/tigera/envoy-gateway:v3.22.3
+ docker tag quay.io/tigera/envoy-gateway:v3.22.4 $PRIVATE_REGISTRY/tigera/envoy-gateway:v3.22.4

- docker tag quay.io/tigera/envoy-proxy:v3.22.3 $PRIVATE_REGISTRY/tigera/envoy-proxy:v3.22.3
+ docker tag quay.io/tigera/envoy-proxy:v3.22.4 $PRIVATE_REGISTRY/tigera/envoy-proxy:v3.22.4

- docker tag quay.io/tigera/envoy-ratelimit:v3.22.3 $PRIVATE_REGISTRY/tigera/envoy-ratelimit:v3.22.3
+ docker tag quay.io/tigera/envoy-ratelimit:v3.22.4 $PRIVATE_REGISTRY/tigera/envoy-ratelimit:v3.22.4

- docker tag quay.io/tigera/guardian:v3.22.3 $PRIVATE_REGISTRY/tigera/guardian:v3.22.3
+ docker tag quay.io/tigera/guardian:v3.22.4 $PRIVATE_REGISTRY/tigera/guardian:v3.22.4

- docker tag quay.io/tigera/ingress-collector:v3.22.3 $PRIVATE_REGISTRY/tigera/ingress-collector:v3.22.3
+ docker tag quay.io/tigera/ingress-collector:v3.22.4 $PRIVATE_REGISTRY/tigera/ingress-collector:v3.22.4

- docker tag quay.io/tigera/intrusion-detection-controller:v3.22.3 $PRIVATE_REGISTRY/tigera/intrusion-detection-controller:v3.22.3
+ docker tag quay.io/tigera/intrusion-detection-controller:v3.22.4 $PRIVATE_REGISTRY/tigera/intrusion-detection-controller:v3.22.4

- docker tag quay.io/tigera/key-cert-provisioner:v3.22.3 $PRIVATE_REGISTRY/tigera/key-cert-provisioner:v3.22.3
+ docker tag quay.io/tigera/key-cert-provisioner:v3.22.4 $PRIVATE_REGISTRY/tigera/key-cert-provisioner:v3.22.4

- docker tag quay.io/tigera/kibana:v3.22.3 $PRIVATE_REGISTRY/tigera/kibana:v3.22.3
+ docker tag quay.io/tigera/kibana:v3.22.4 $PRIVATE_REGISTRY/tigera/kibana:v3.22.4

- docker tag quay.io/tigera/l7-admission-controller:v3.22.3 $PRIVATE_REGISTRY/tigera/l7-admission-controller:v3.22.3
+ docker tag quay.io/tigera/l7-admission-controller:v3.22.4 $PRIVATE_REGISTRY/tigera/l7-admission-controller:v3.22.4

- docker tag quay.io/tigera/l7-collector:v3.22.3 $PRIVATE_REGISTRY/tigera/l7-collector:v3.22.3
+ docker tag quay.io/tigera/l7-collector:v3.22.4 $PRIVATE_REGISTRY/tigera/l7-collector:v3.22.4

- docker tag quay.io/tigera/license-agent:v3.22.3 $PRIVATE_REGISTRY/tigera/license-agent:v3.22.3
+ docker tag quay.io/tigera/license-agent:v3.22.4 $PRIVATE_REGISTRY/tigera/license-agent:v3.22.4

- docker tag quay.io/tigera/linseed:v3.22.3 $PRIVATE_REGISTRY/tigera/linseed:v3.22.3
+ docker tag quay.io/tigera/linseed:v3.22.4 $PRIVATE_REGISTRY/tigera/linseed:v3.22.4

- docker tag quay.io/tigera/packetcapture:v3.22.3 $PRIVATE_REGISTRY/tigera/packetcapture:v3.22.3
+ docker tag quay.io/tigera/packetcapture:v3.22.4 $PRIVATE_REGISTRY/tigera/packetcapture:v3.22.4

- docker tag quay.io/tigera/policy-recommendation:v3.22.3 $PRIVATE_REGISTRY/tigera/policy-recommendation:v3.22.3
+ docker tag quay.io/tigera/policy-recommendation:v3.22.4 $PRIVATE_REGISTRY/tigera/policy-recommendation:v3.22.4

- docker tag quay.io/tigera/prometheus:v3.22.3 $PRIVATE_REGISTRY/tigera/prometheus:v3.22.3
+ docker tag quay.io/tigera/prometheus:v3.22.4 $PRIVATE_REGISTRY/tigera/prometheus:v3.22.4

- docker tag quay.io/tigera/prometheus-config-reloader:v3.22.3 $PRIVATE_REGISTRY/tigera/prometheus-config-reloader:v3.22.3
+ docker tag quay.io/tigera/prometheus-config-reloader:v3.22.4 $PRIVATE_REGISTRY/tigera/prometheus-config-reloader:v3.22.4

- docker tag quay.io/tigera/prometheus-operator:v3.22.3 $PRIVATE_REGISTRY/tigera/prometheus-operator:v3.22.3
+ docker tag quay.io/tigera/prometheus-operator:v3.22.4 $PRIVATE_REGISTRY/tigera/prometheus-operator:v3.22.4

- docker tag quay.io/tigera/cni:v3.22.3 $PRIVATE_REGISTRY/tigera/cni:v3.22.3
+ docker tag quay.io/tigera/cni:v3.22.4 $PRIVATE_REGISTRY/tigera/cni:v3.22.4

- docker tag quay.io/tigera/prometheus-service:v3.22.3 $PRIVATE_REGISTRY/tigera/prometheus-service:v3.22.3
+ docker tag quay.io/tigera/prometheus-service:v3.22.4 $PRIVATE_REGISTRY/tigera/prometheus-service:v3.22.4

- docker tag quay.io/tigera/typha:v3.22.3 $PRIVATE_REGISTRY/tigera/typha:v3.22.3
+ docker tag quay.io/tigera/typha:v3.22.4 $PRIVATE_REGISTRY/tigera/typha:v3.22.4

- docker tag quay.io/tigera/ui-apis:v3.22.3 $PRIVATE_REGISTRY/tigera/ui-apis:v3.22.3
+ docker tag quay.io/tigera/ui-apis:v3.22.4 $PRIVATE_REGISTRY/tigera/ui-apis:v3.22.4

- docker tag quay.io/tigera/voltron:v3.22.3 $PRIVATE_REGISTRY/tigera/voltron:v3.22.3
+ docker tag quay.io/tigera/voltron:v3.22.4 $PRIVATE_REGISTRY/tigera/voltron:v3.22.4

- docker tag quay.io/tigera/waf-http-filter:v3.22.3 $PRIVATE_REGISTRY/tigera/waf-http-filter:v3.22.3
+ docker tag quay.io/tigera/waf-http-filter:v3.22.4 $PRIVATE_REGISTRY/tigera/waf-http-filter:v3.22.4

- docker tag quay.io/tigera/webhooks-processor:v3.22.3 $PRIVATE_REGISTRY/tigera/webhooks-processor:v3.22.3
+ docker tag quay.io/tigera/webhooks-processor:v3.22.4 $PRIVATE_REGISTRY/tigera/webhooks-processor:v3.22.4
```

3. Push the images to your private registry.

```bash
- docker push $PRIVATE_REGISTRY/tigera/operator:v1.40.9
+ docker push $PRIVATE_REGISTRY/tigera/operator:v1.40.10

- docker push $PRIVATE_REGISTRY/tigera/alertmanager:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/alertmanager:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/calicoctl:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/calicoctl:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/calicoq:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/calicoq:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/apiserver:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/apiserver:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/kube-controllers:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/kube-controllers:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/manager:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/manager:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/node:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/node:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/queryserver:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/queryserver:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/compliance-benchmarker:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/compliance-benchmarker:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/compliance-controller:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/compliance-controller:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/compliance-reporter:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/compliance-reporter:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/compliance-server:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/compliance-server:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/compliance-snapshotter:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/compliance-snapshotter:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/csi:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/csi:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/node-driver-registrar:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/node-driver-registrar:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/deep-packet-inspection:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/deep-packet-inspection:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/dex:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/dex:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/dikastes:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/dikastes:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/egress-gateway:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/egress-gateway:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/intrusion-detection-job-installer:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/intrusion-detection-job-installer:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/elasticsearch:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/elasticsearch:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/elasticsearch-metrics:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/elasticsearch-metrics:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/eck-operator:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/eck-operator:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/envoy:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/envoy:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/es-gateway:v3.22.3
+ docker push $PRIVATE_REGISTRY/tigera/es-gateway:v3.22.4

- docker push $PRIVATE_REGISTRY/tigera/firewall-integration:v3.22.3
+ docker push
$PRIVATE_REGISTRY/tigera/firewall-integration:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/pod2daemon-flexvol:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/pod2daemon-flexvol:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/fluentd:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/fluentd:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/envoy-gateway:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/envoy-gateway:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/envoy-proxy:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/envoy-proxy:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/envoy-ratelimit:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/envoy-ratelimit:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/guardian:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/guardian:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/ingress-collector:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/ingress-collector:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/intrusion-detection-controller:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/intrusion-detection-controller:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/key-cert-provisioner:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/key-cert-provisioner:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/kibana:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/kibana:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/l7-admission-controller:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/l7-admission-controller:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/l7-collector:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/l7-collector:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/license-agent:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/license-agent:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/linseed:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/linseed:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/packetcapture:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/packetcapture:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/policy-recommendation:v3.22.3 + docker 
push $PRIVATE_REGISTRY/tigera/policy-recommendation:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/prometheus:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/prometheus:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/prometheus-config-reloader:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/prometheus-config-reloader:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/prometheus-operator:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/prometheus-operator:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/cni:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/cni:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/prometheus-service:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/prometheus-service:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/typha:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/typha:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/ui-apis:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/ui-apis:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/voltron:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/voltron:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/waf-http-filter:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/waf-http-filter:v3.22.4 - docker push $PRIVATE_REGISTRY/tigera/webhooks-processor:v3.22.3 + docker push $PRIVATE_REGISTRY/tigera/webhooks-processor:v3.22.4 ``` > **WARNING:** @@ -6944,11 +6944,11 @@ In order to install images from your private registry, you must first pull the i For hybrid Linux + Windows clusters, use `crane cp` on the following Windows images to copy them to your private registry. 
```bash - crane cp quay.io/tigera/node-windows:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/node-windows:v3.22.3 + crane cp quay.io/tigera/node-windows:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/node-windows:v3.22.4 - crane cp quay.io/tigera/fluentd-windows:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/fluentd-windows:v3.22.3 + crane cp quay.io/tigera/fluentd-windows:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/fluentd-windows:v3.22.4 - crane cp quay.io/tigera/cni-windows:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/cni-windows:v3.22.3 + crane cp quay.io/tigera/cni-windows:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/cni-windows:v3.22.4 ``` > **WARNING:** @@ -7060,329 +7060,329 @@ In order to install images from your private registry, you must first pull the i 1. Use the following commands to pull the required Calico Enterprise images. 2. ```bash - docker pull quay.io/tigera/operator:v1.40.9 + docker pull quay.io/tigera/operator:v1.40.10 - docker pull quay.io/tigera/alertmanager:v3.22.3 + docker pull quay.io/tigera/alertmanager:v3.22.4 - docker pull quay.io/tigera/calicoctl:v3.22.3 + docker pull quay.io/tigera/calicoctl:v3.22.4 - docker pull quay.io/tigera/calicoq:v3.22.3 + docker pull quay.io/tigera/calicoq:v3.22.4 - docker pull quay.io/tigera/apiserver:v3.22.3 + docker pull quay.io/tigera/apiserver:v3.22.4 - docker pull quay.io/tigera/kube-controllers:v3.22.3 + docker pull quay.io/tigera/kube-controllers:v3.22.4 - docker pull quay.io/tigera/manager:v3.22.3 + docker pull quay.io/tigera/manager:v3.22.4 - docker pull quay.io/tigera/node:v3.22.3 + docker pull quay.io/tigera/node:v3.22.4 - docker pull quay.io/tigera/queryserver:v3.22.3 + docker pull quay.io/tigera/queryserver:v3.22.4 - docker pull quay.io/tigera/compliance-benchmarker:v3.22.3 + docker pull quay.io/tigera/compliance-benchmarker:v3.22.4 - docker pull quay.io/tigera/compliance-controller:v3.22.3 + docker pull quay.io/tigera/compliance-controller:v3.22.4 - docker pull quay.io/tigera/compliance-reporter:v3.22.3 + docker pull 
quay.io/tigera/compliance-reporter:v3.22.4 - docker pull quay.io/tigera/compliance-server:v3.22.3 + docker pull quay.io/tigera/compliance-server:v3.22.4 - docker pull quay.io/tigera/compliance-snapshotter:v3.22.3 + docker pull quay.io/tigera/compliance-snapshotter:v3.22.4 - docker pull quay.io/tigera/csi:v3.22.3 + docker pull quay.io/tigera/csi:v3.22.4 - docker pull quay.io/tigera/node-driver-registrar:v3.22.3 + docker pull quay.io/tigera/node-driver-registrar:v3.22.4 - docker pull quay.io/tigera/deep-packet-inspection:v3.22.3 + docker pull quay.io/tigera/deep-packet-inspection:v3.22.4 - docker pull quay.io/tigera/dex:v3.22.3 + docker pull quay.io/tigera/dex:v3.22.4 - docker pull quay.io/tigera/dikastes:v3.22.3 + docker pull quay.io/tigera/dikastes:v3.22.4 - docker pull quay.io/tigera/egress-gateway:v3.22.3 + docker pull quay.io/tigera/egress-gateway:v3.22.4 - docker pull quay.io/tigera/intrusion-detection-job-installer:v3.22.3 + docker pull quay.io/tigera/intrusion-detection-job-installer:v3.22.4 - docker pull quay.io/tigera/elasticsearch:v3.22.3 + docker pull quay.io/tigera/elasticsearch:v3.22.4 - docker pull quay.io/tigera/elasticsearch-metrics:v3.22.3 + docker pull quay.io/tigera/elasticsearch-metrics:v3.22.4 - docker pull quay.io/tigera/eck-operator:v3.22.3 + docker pull quay.io/tigera/eck-operator:v3.22.4 - docker pull quay.io/tigera/envoy:v3.22.3 + docker pull quay.io/tigera/envoy:v3.22.4 - docker pull quay.io/tigera/es-gateway:v3.22.3 + docker pull quay.io/tigera/es-gateway:v3.22.4 - docker pull quay.io/tigera/firewall-integration:v3.22.3 + docker pull quay.io/tigera/firewall-integration:v3.22.4 - docker pull quay.io/tigera/pod2daemon-flexvol:v3.22.3 + docker pull quay.io/tigera/pod2daemon-flexvol:v3.22.4 - docker pull quay.io/tigera/fluentd:v3.22.3 + docker pull quay.io/tigera/fluentd:v3.22.4 - docker pull quay.io/tigera/envoy-gateway:v3.22.3 + docker pull quay.io/tigera/envoy-gateway:v3.22.4 - docker pull quay.io/tigera/envoy-proxy:v3.22.3 + docker pull 
quay.io/tigera/envoy-proxy:v3.22.4 - docker pull quay.io/tigera/envoy-ratelimit:v3.22.3 + docker pull quay.io/tigera/envoy-ratelimit:v3.22.4 - docker pull quay.io/tigera/guardian:v3.22.3 + docker pull quay.io/tigera/guardian:v3.22.4 - docker pull quay.io/tigera/ingress-collector:v3.22.3 + docker pull quay.io/tigera/ingress-collector:v3.22.4 - docker pull quay.io/tigera/intrusion-detection-controller:v3.22.3 + docker pull quay.io/tigera/intrusion-detection-controller:v3.22.4 - docker pull quay.io/tigera/key-cert-provisioner:v3.22.3 + docker pull quay.io/tigera/key-cert-provisioner:v3.22.4 - docker pull quay.io/tigera/kibana:v3.22.3 + docker pull quay.io/tigera/kibana:v3.22.4 - docker pull quay.io/tigera/l7-admission-controller:v3.22.3 + docker pull quay.io/tigera/l7-admission-controller:v3.22.4 - docker pull quay.io/tigera/l7-collector:v3.22.3 + docker pull quay.io/tigera/l7-collector:v3.22.4 - docker pull quay.io/tigera/license-agent:v3.22.3 + docker pull quay.io/tigera/license-agent:v3.22.4 - docker pull quay.io/tigera/linseed:v3.22.3 + docker pull quay.io/tigera/linseed:v3.22.4 - docker pull quay.io/tigera/packetcapture:v3.22.3 + docker pull quay.io/tigera/packetcapture:v3.22.4 - docker pull quay.io/tigera/policy-recommendation:v3.22.3 + docker pull quay.io/tigera/policy-recommendation:v3.22.4 - docker pull quay.io/tigera/prometheus:v3.22.3 + docker pull quay.io/tigera/prometheus:v3.22.4 - docker pull quay.io/tigera/prometheus-config-reloader:v3.22.3 + docker pull quay.io/tigera/prometheus-config-reloader:v3.22.4 - docker pull quay.io/tigera/prometheus-operator:v3.22.3 + docker pull quay.io/tigera/prometheus-operator:v3.22.4 - docker pull quay.io/tigera/cni:v3.22.3 + docker pull quay.io/tigera/cni:v3.22.4 - docker pull quay.io/tigera/prometheus-service:v3.22.3 + docker pull quay.io/tigera/prometheus-service:v3.22.4 - docker pull quay.io/tigera/typha:v3.22.3 + docker pull quay.io/tigera/typha:v3.22.4 - docker pull quay.io/tigera/ui-apis:v3.22.3 + docker pull 
quay.io/tigera/ui-apis:v3.22.4 - docker pull quay.io/tigera/voltron:v3.22.3 + docker pull quay.io/tigera/voltron:v3.22.4 - docker pull quay.io/tigera/waf-http-filter:v3.22.3 + docker pull quay.io/tigera/waf-http-filter:v3.22.4 - docker pull quay.io/tigera/webhooks-processor:v3.22.3 + docker pull quay.io/tigera/webhooks-processor:v3.22.4 ``` Retag the images with the name of your private registry `$PRIVATE_REGISTRY` and `$IMAGE_PATH`. 3. ```bash - docker tag quay.io/tigera/operator:v1.40.9 $PRIVATE_REGISTRY/$IMAGE_PATH/operator:v1.40.9 + docker tag quay.io/tigera/operator:v1.40.10 $PRIVATE_REGISTRY/$IMAGE_PATH/operator:v1.40.10 - docker tag quay.io/tigera/alertmanager:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/alertmanager:v3.22.3 + docker tag quay.io/tigera/alertmanager:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/alertmanager:v3.22.4 - docker tag quay.io/tigera/calicoctl:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/calicoctl:v3.22.3 + docker tag quay.io/tigera/calicoctl:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/calicoctl:v3.22.4 - docker tag quay.io/tigera/calicoq:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/calicoq:v3.22.3 + docker tag quay.io/tigera/calicoq:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/calicoq:v3.22.4 - docker tag quay.io/tigera/apiserver:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/apiserver:v3.22.3 + docker tag quay.io/tigera/apiserver:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/apiserver:v3.22.4 - docker tag quay.io/tigera/kube-controllers:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/kube-controllers:v3.22.3 + docker tag quay.io/tigera/kube-controllers:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/kube-controllers:v3.22.4 - docker tag quay.io/tigera/manager:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/manager:v3.22.3 + docker tag quay.io/tigera/manager:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/manager:v3.22.4 - docker tag quay.io/tigera/node:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/node:v3.22.3 + docker tag quay.io/tigera/node:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/node:v3.22.4 - docker tag 
quay.io/tigera/queryserver:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/queryserver:v3.22.3 + docker tag quay.io/tigera/queryserver:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/queryserver:v3.22.4 - docker tag quay.io/tigera/compliance-benchmarker:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-benchmarker:v3.22.3 + docker tag quay.io/tigera/compliance-benchmarker:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-benchmarker:v3.22.4 - docker tag quay.io/tigera/compliance-controller:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-controller:v3.22.3 + docker tag quay.io/tigera/compliance-controller:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-controller:v3.22.4 - docker tag quay.io/tigera/compliance-reporter:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-reporter:v3.22.3 + docker tag quay.io/tigera/compliance-reporter:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-reporter:v3.22.4 - docker tag quay.io/tigera/compliance-server:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-server:v3.22.3 + docker tag quay.io/tigera/compliance-server:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-server:v3.22.4 - docker tag quay.io/tigera/compliance-snapshotter:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-snapshotter:v3.22.3 + docker tag quay.io/tigera/compliance-snapshotter:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-snapshotter:v3.22.4 - docker tag quay.io/tigera/csi:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/csi:v3.22.3 + docker tag quay.io/tigera/csi:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/csi:v3.22.4 - docker tag quay.io/tigera/node-driver-registrar:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/node-driver-registrar:v3.22.3 + docker tag quay.io/tigera/node-driver-registrar:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/node-driver-registrar:v3.22.4 - docker tag quay.io/tigera/deep-packet-inspection:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/deep-packet-inspection:v3.22.3 + docker tag quay.io/tigera/deep-packet-inspection:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/deep-packet-inspection:v3.22.4 - docker tag 
quay.io/tigera/dex:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/dex:v3.22.3 + docker tag quay.io/tigera/dex:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/dex:v3.22.4 - docker tag quay.io/tigera/dikastes:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/dikastes:v3.22.3 + docker tag quay.io/tigera/dikastes:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/dikastes:v3.22.4 - docker tag quay.io/tigera/egress-gateway:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/egress-gateway:v3.22.3 + docker tag quay.io/tigera/egress-gateway:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/egress-gateway:v3.22.4 - docker tag quay.io/tigera/intrusion-detection-job-installer:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/intrusion-detection-job-installer:v3.22.3 + docker tag quay.io/tigera/intrusion-detection-job-installer:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/intrusion-detection-job-installer:v3.22.4 - docker tag quay.io/tigera/elasticsearch:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/elasticsearch:v3.22.3 + docker tag quay.io/tigera/elasticsearch:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/elasticsearch:v3.22.4 - docker tag quay.io/tigera/elasticsearch-metrics:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/elasticsearch-metrics:v3.22.3 + docker tag quay.io/tigera/elasticsearch-metrics:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/elasticsearch-metrics:v3.22.4 - docker tag quay.io/tigera/eck-operator:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/eck-operator:v3.22.3 + docker tag quay.io/tigera/eck-operator:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/eck-operator:v3.22.4 - docker tag quay.io/tigera/envoy:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/envoy:v3.22.3 + docker tag quay.io/tigera/envoy:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/envoy:v3.22.4 - docker tag quay.io/tigera/es-gateway:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/es-gateway:v3.22.3 + docker tag quay.io/tigera/es-gateway:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/es-gateway:v3.22.4 - docker tag quay.io/tigera/firewall-integration:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/firewall-integration:v3.22.3 + docker tag 
quay.io/tigera/firewall-integration:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/firewall-integration:v3.22.4 - docker tag quay.io/tigera/pod2daemon-flexvol:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/pod2daemon-flexvol:v3.22.3 + docker tag quay.io/tigera/pod2daemon-flexvol:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/pod2daemon-flexvol:v3.22.4 - docker tag quay.io/tigera/fluentd:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/fluentd:v3.22.3 + docker tag quay.io/tigera/fluentd:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/fluentd:v3.22.4 - docker tag quay.io/tigera/envoy-gateway:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-gateway:v3.22.3 + docker tag quay.io/tigera/envoy-gateway:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-gateway:v3.22.4 - docker tag quay.io/tigera/envoy-proxy:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-proxy:v3.22.3 + docker tag quay.io/tigera/envoy-proxy:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-proxy:v3.22.4 - docker tag quay.io/tigera/envoy-ratelimit:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-ratelimit:v3.22.3 + docker tag quay.io/tigera/envoy-ratelimit:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-ratelimit:v3.22.4 - docker tag quay.io/tigera/guardian:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/guardian:v3.22.3 + docker tag quay.io/tigera/guardian:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/guardian:v3.22.4 - docker tag quay.io/tigera/ingress-collector:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/ingress-collector:v3.22.3 + docker tag quay.io/tigera/ingress-collector:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/ingress-collector:v3.22.4 - docker tag quay.io/tigera/intrusion-detection-controller:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/intrusion-detection-controller:v3.22.3 + docker tag quay.io/tigera/intrusion-detection-controller:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/intrusion-detection-controller:v3.22.4 - docker tag quay.io/tigera/key-cert-provisioner:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/key-cert-provisioner:v3.22.3 + docker tag quay.io/tigera/key-cert-provisioner:v3.22.4 
$PRIVATE_REGISTRY/$IMAGE_PATH/key-cert-provisioner:v3.22.4 - docker tag quay.io/tigera/kibana:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/kibana:v3.22.3 + docker tag quay.io/tigera/kibana:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/kibana:v3.22.4 - docker tag quay.io/tigera/l7-admission-controller:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/l7-admission-controller:v3.22.3 + docker tag quay.io/tigera/l7-admission-controller:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/l7-admission-controller:v3.22.4 - docker tag quay.io/tigera/l7-collector:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/l7-collector:v3.22.3 + docker tag quay.io/tigera/l7-collector:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/l7-collector:v3.22.4 - docker tag quay.io/tigera/license-agent:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/license-agent:v3.22.3 + docker tag quay.io/tigera/license-agent:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/license-agent:v3.22.4 - docker tag quay.io/tigera/linseed:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/linseed:v3.22.3 + docker tag quay.io/tigera/linseed:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/linseed:v3.22.4 - docker tag quay.io/tigera/packetcapture:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/packetcapture:v3.22.3 + docker tag quay.io/tigera/packetcapture:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/packetcapture:v3.22.4 - docker tag quay.io/tigera/policy-recommendation:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/policy-recommendation:v3.22.3 + docker tag quay.io/tigera/policy-recommendation:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/policy-recommendation:v3.22.4 - docker tag quay.io/tigera/prometheus:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus:v3.22.3 + docker tag quay.io/tigera/prometheus:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus:v3.22.4 - docker tag quay.io/tigera/prometheus-config-reloader:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-config-reloader:v3.22.3 + docker tag quay.io/tigera/prometheus-config-reloader:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-config-reloader:v3.22.4 - docker tag 
quay.io/tigera/prometheus-operator:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-operator:v3.22.3 + docker tag quay.io/tigera/prometheus-operator:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-operator:v3.22.4 - docker tag quay.io/tigera/cni:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/cni:v3.22.3 + docker tag quay.io/tigera/cni:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/cni:v3.22.4 - docker tag quay.io/tigera/prometheus-service:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-service:v3.22.3 + docker tag quay.io/tigera/prometheus-service:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-service:v3.22.4 - docker tag quay.io/tigera/typha:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/typha:v3.22.3 + docker tag quay.io/tigera/typha:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/typha:v3.22.4 - docker tag quay.io/tigera/ui-apis:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/ui-apis:v3.22.3 + docker tag quay.io/tigera/ui-apis:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/ui-apis:v3.22.4 - docker tag quay.io/tigera/voltron:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/voltron:v3.22.3 + docker tag quay.io/tigera/voltron:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/voltron:v3.22.4 - docker tag quay.io/tigera/waf-http-filter:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/waf-http-filter:v3.22.3 + docker tag quay.io/tigera/waf-http-filter:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/waf-http-filter:v3.22.4 - docker tag quay.io/tigera/webhooks-processor:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/webhooks-processor:v3.22.3 + docker tag quay.io/tigera/webhooks-processor:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/webhooks-processor:v3.22.4 ``` Push the images to your private registry. 4. 
```bash - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/operator:v1.40.9 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/alertmanager:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/operator:v1.40.10 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/alertmanager:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/calicoctl:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/calicoctl:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/calicoq:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/calicoq:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/apiserver:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/apiserver:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/kube-controllers:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/kube-controllers:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/manager:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/manager:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/node:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/node:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/queryserver:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/queryserver:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-benchmarker:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-benchmarker:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-controller:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-controller:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-reporter:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-reporter:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-server:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-server:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-snapshotter:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/compliance-snapshotter:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/csi:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/csi:v3.22.4 - docker push
$PRIVATE_REGISTRY/$IMAGE_PATH/node-driver-registrar:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/node-driver-registrar:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/deep-packet-inspection:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/deep-packet-inspection:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/dex:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/dex:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/dikastes:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/dikastes:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/egress-gateway:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/egress-gateway:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/intrusion-detection-job-installer:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/intrusion-detection-job-installer:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/elasticsearch:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/elasticsearch:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/elasticsearch-metrics:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/elasticsearch-metrics:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/eck-operator:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/eck-operator:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/envoy:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/envoy:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/es-gateway:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/es-gateway:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/firewall-integration:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/firewall-integration:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/pod2daemon-flexvol:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/pod2daemon-flexvol:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/fluentd:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/fluentd:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-gateway:v3.22.3 + docker push 
$PRIVATE_REGISTRY/$IMAGE_PATH/envoy-gateway:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-proxy:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-proxy:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-ratelimit:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/envoy-ratelimit:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/guardian:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/guardian:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/ingress-collector:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/ingress-collector:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/intrusion-detection-controller:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/intrusion-detection-controller:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/key-cert-provisioner:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/key-cert-provisioner:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/kibana:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/kibana:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/l7-admission-controller:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/l7-admission-controller:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/l7-collector:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/l7-collector:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/license-agent:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/license-agent:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/linseed:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/linseed:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/packetcapture:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/packetcapture:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/policy-recommendation:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/policy-recommendation:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus:v3.22.4 - docker push 
$PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-config-reloader:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-config-reloader:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-operator:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-operator:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/cni:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/cni:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-service:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/prometheus-service:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/typha:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/typha:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/ui-apis:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/ui-apis:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/voltron:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/voltron:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/waf-http-filter:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/waf-http-filter:v3.22.4 - docker push $PRIVATE_REGISTRY/$IMAGE_PATH/webhooks-processor:v3.22.3 + docker push $PRIVATE_REGISTRY/$IMAGE_PATH/webhooks-processor:v3.22.4 ``` > **WARNING:** @@ -7402,11 +7402,11 @@ In order to install images from your private registry, you must first pull the i For hybrid Linux + Windows clusters, use `crane cp` on the following Windows images to copy them to your private registry. 
```bash
- crane cp quay.io/tigera/node-windows:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/node-windows:v3.22.3
+ crane cp quay.io/tigera/node-windows:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/node-windows:v3.22.4
- crane cp quay.io/tigera/fluentd-windows:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/fluentd-windows:v3.22.3
+ crane cp quay.io/tigera/fluentd-windows:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/fluentd-windows:v3.22.4
- crane cp quay.io/tigera/cni-windows:v3.22.3 $PRIVATE_REGISTRY/$IMAGE_PATH/cni-windows:v3.22.3
+ crane cp quay.io/tigera/cni-windows:v3.22.4 $PRIVATE_REGISTRY/$IMAGE_PATH/cni-windows:v3.22.4
```

> **WARNING:**

@@ -7712,15 +7712,15 @@ Due to the large number of distributions and kernel version out there, it’s ha

## [📄️Install Calico on non-cluster hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/about)

-[Install Calico on non-cluster hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/about)

+[Install Calico Enterprise on non-cluster hosts and VMs to apply Calico network policy and capture flow logs for workloads running outside Kubernetes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/about)

## [📄️Use custom certificates for Node and Typha](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/typha-node-tls)

-[Use custom TLS certificates for non-cluster Calico Node and Typha](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/typha-node-tls)

+[Configure custom TLS certificates between non-cluster Calico Enterprise nodes and Typha for clusters with strict PKI requirements.](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/typha-node-tls)

## [📄️Troubleshoot non-cluster hosts and VMs setup](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/troubleshoot)

-[Troubleshoot non-cluster hosts and VMs setup](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/troubleshoot)

+[Troubleshooting guide for Calico Enterprise on non-cluster hosts and VMs — connectivity, agent registration, and policy issues.](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/troubleshoot)

### Install Calico on non-cluster hosts and VMs

@@ -8058,7 +8058,7 @@ If you need to force immediate renewal, manually delete the existing certificate

## [📄️Upgrade Calico Enterprise installed with OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/openshift-upgrade)

-[Upgrade to a newer version of Calico Enterprise installed with OpenShift.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/openshift-upgrade)

+[Upgrade an existing Calico Enterprise installation on an OpenShift 4 cluster to a newer version.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/openshift-upgrade)

### Kubernetes

@@ -8066,11 +8066,11 @@ If you need to force immediate renewal, manually delete the existing certificate

## [📄️Upgrade Calico Enterprise installed with Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm)

-[Upgrade to a newer version of Calico Enterprise installed with Helm.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm)

+[Upgrade a Helm-installed Calico Enterprise cluster on Kubernetes to a newer version.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm)

## [📄️Upgrade Calico Enterprise installed with the operator](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator)

-[Upgrading from an earlier release of Calico Enterprise with the operator.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator)

+[Upgrade an operator-installed Calico Enterprise cluster on Kubernetes to a newer version.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator)

### Upgrade Calico Enterprise installed with Helm

@@ -8117,17 +8117,17 @@ Calico Enterprise creates a default-deny for the calico-system namespace. If you

1. Get the Helm chart

```bash
- curl -O -L https://downloads.tigera.io/ee/charts/tigera-operator-v3.22.3-0.tgz
+ curl -O -L https://downloads.tigera.io/ee/charts/tigera-operator-v3.22.4-0.tgz
```

2. Install the Calico Enterprise custom resource definitions.

```bash
- kubectl apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
- kubectl apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.3/manifests/prometheus-operator-crds.yaml
+ kubectl apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.4/manifests/prometheus-operator-crds.yaml
- kubectl apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.3/manifests/eck-operator-crds.yaml
+ kubectl apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.4/manifests/eck-operator-crds.yaml
```

3. If your cluster is v3.19 or older, update `values.yaml` with `packetCaptureAPI` enabled to true.

@@ -8145,7 +8145,7 @@ Calico Enterprise creates a default-deny for the calico-system namespace. If you

If you are using default `values.yaml`, copy the custom `values.yaml` and update packetCaptureAPI's `enabled` to `true`. Then, replace `` in the next step with this modified `values.yaml` for the Helm upgrade.
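The `values.yaml` edit described above can be done with a one-line substitution. A sketch under the assumption that the flag appears as a nested `enabled: false` key directly under `packetCaptureAPI`; the stand-in file below is illustrative, so check the layout of the real `values.yaml` produced by `helm show values` before relying on the pattern:

```shell
#!/usr/bin/env bash
# Stand-in values.yaml with the flag disabled (illustrative layout only).
cat > values.yaml <<'EOF'
packetCaptureAPI:
  enabled: false
EOF

# Flip the flag in place; -i.bak keeps a backup and works with GNU and BSD sed.
# In a real values.yaml, scope the substitution to the packetCaptureAPI block.
sed -i.bak 's/enabled: false/enabled: true/' values.yaml
```

Pass the modified file to the `helm upgrade` step as your custom values file.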
```bash
- helm show values ./tigera-operator-v3.22.3-0.tgz >values.yaml
+ helm show values ./tigera-operator-v3.22.4-0.tgz >values.yaml
```

4. Optional: Compliance and packetcapture features are optional. To enable or maintain the enabled status, review the `values.yaml` file and set the flag to `enabled: true`.

@@ -8167,13 +8167,13 @@ Calico Enterprise creates a default-deny for the calico-system namespace. If you

If you are using default `values.yaml`, copy the custom `values.yaml` and update compliance and packetCaptureAPI's `enabled` to `true`. Then, replace `` in the next step with this modified `values.yaml` for the Helm upgrade.

```bash
- helm show values ./tigera-operator-v3.22.3-0.tgz >values.yaml
+ helm show values ./tigera-operator-v3.22.4-0.tgz >values.yaml
```

5. Run the Helm upgrade command for `tigera-operator` and make sure to either update `values.yaml` with your configuration or use custom `values.yaml` file:

```bash
-helm upgrade calico-enterprise --values= tigera-operator-v3.22.3-0.tgz \
+helm upgrade calico-enterprise --values= tigera-operator-v3.22.4-0.tgz \
--set-file imagePullSecrets.tigera-pull-secret=,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret= \
```

@@ -8251,7 +8251,7 @@ For Calico Enterprise, upgrading multi-cluster management setups must include up

1. Download the new manifests for Tigera Operator.

```bash
- curl -L -O https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ curl -L -O https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

2. Download the new manifests for Prometheus operator.

@@ -8261,7 +8261,7 @@ For Calico Enterprise, upgrading multi-cluster management setups must include up

> If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
```bash
- curl -L -O https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ curl -L -O https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. If you previously [installed using a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry), you will need to [push the new images ](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#push-calico-enterprise-images-to-your-private-registry)and then [update the manifest](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#run-the-operator-using-images-from-your-private-registry) downloaded in the previous step.

@@ -8419,7 +8419,7 @@ If the `active-namespace` is `tigera-operator-enterprise`, then the cluster was

1. Download the new manifests for Tigera Operator.

```bash
- curl -L -o tigera-operator.yaml https://downloads.tigera.io/ee/v3.22.3/manifests/aks/tigera-operator-upgrade.yaml
+ curl -L -o tigera-operator.yaml https://downloads.tigera.io/ee/v3.22.4/manifests/aks/tigera-operator-upgrade.yaml
```

2. Download the new manifests for Prometheus operator.

@@ -8429,7 +8429,7 @@ If the `active-namespace` is `tigera-operator-enterprise`, then the cluster was

> If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.

```bash
- curl -L -O https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ curl -L -O https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. If you previously [installed using a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry), you will need to [push the new images ](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#push-calico-enterprise-images-to-your-private-registry)and then [update the manifest](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#run-the-operator-using-images-from-your-private-registry) downloaded in the previous step.

@@ -8653,7 +8653,7 @@ Download the Calico Enterprise manifests for OpenShift and add t

```bash
mkdir calico
-wget -qO- https://downloads.tigera.io/ee/v3.22.3/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico --exclude=03-cr-* --exclude=02-pull-secret.yaml
+wget -qO- https://downloads.tigera.io/ee/v3.22.4/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico --exclude=03-cr-* --exclude=02-pull-secret.yaml
cp calico/* manifests/
```

@@ -8691,7 +8691,7 @@ cp calico/* manifests/

> that you manage yourself.

```bash
- oc apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-prometheus-operator.yaml
+ oc apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-prometheus-operator.yaml
```

3. If your cluster is a management cluster, apply a [ManagementCluster](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#managementcluster) CR to your cluster.
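For the private-registry path above, the image references in the downloaded operator manifest can be rewritten with one substitution. A sketch on a stand-in manifest fragment: the `quay.io/tigera` prefix follows the image names used elsewhere in these docs, but the sed expression is an assumption, so verify the actual references in your downloaded `tigera-operator.yaml` before applying:

```shell
#!/usr/bin/env bash
PRIVATE_REGISTRY="registry.example.com"   # placeholder private registry

# Stand-in for an image line inside the downloaded tigera-operator.yaml.
printf 'image: quay.io/tigera/operator:v3.22.4\n' > tigera-operator.yaml

# Point every quay.io/tigera reference at the private registry instead.
sed -i.bak "s#quay.io/tigera/#${PRIVATE_REGISTRY}/tigera/#g" tigera-operator.yaml
```

Using `#` as the sed delimiter avoids escaping the slashes in the image paths.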
@@ -8787,13 +8787,13 @@ cp calico/* manifests/

If your cluster is a **managed** cluster, run this command:

```bash
- kubectl delete -f https://downloads.tigera.io/ee/v3.22.3/manifests/default-tier-policies-managed.yaml
+ kubectl delete -f https://downloads.tigera.io/ee/v3.22.4/manifests/default-tier-policies-managed.yaml
```

For other clusters, run this command:

```bash
- kubectl delete -f https://downloads.tigera.io/ee/v3.22.3/manifests/default-tier-policies.yaml
+ kubectl delete -f https://downloads.tigera.io/ee/v3.22.4/manifests/default-tier-policies.yaml
```

### Upgrade from Calico to Calico Enterprise

@@ -8806,7 +8806,7 @@ cp calico/* manifests/

## [📄️Upgrade from Calico to Calico Enterprise on OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee-openshift)

-[Steps to upgrade from open source Calico to Calico Enterprise on OpenShift.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee-openshift)

+[Upgrade from Calico Open Source to Calico Enterprise on an OpenShift 4 cluster.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee-openshift)

### Kubernetes

@@ -8814,11 +8814,11 @@ cp calico/* manifests/

## [📄️Upgrade from Calico to Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/standard)

-[Steps to upgrade from open source Calico to Calico Enterprise.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/standard)

+[Upgrade from an operator-installed Calico Open Source cluster to Calico Enterprise on Kubernetes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/standard)

## [📄️Upgrade Calico to Calico Enterprise installed with Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/helm)

-[Upgrade to Calico Enterprise from Calico installed with Helm.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/helm)

+[Upgrade from a Helm-installed Calico Open Source cluster to Calico Enterprise on Kubernetes.](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/helm)

### Upgrade from Calico to Calico Enterprise

@@ -8865,7 +8865,7 @@ If you receive error indicating the custom resource definitions or resource type

1. Download the new manifests for Tigera Operator.

```bash
- curl -L -O https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ curl -L -O https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

2. Download the new manifests for Prometheus operator.

@@ -8875,7 +8875,7 @@ If you receive error indicating the custom resource definitions or resource type

> If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.

```bash
- curl -L -O https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ curl -L -O https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. If you previously [installed using a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry), you will need to [push the new images ](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#push-calico-enterprise-images-to-your-private-registry)and then [update the manifest](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#run-the-operator-using-images-from-your-private-registry) downloaded in the previous step.

@@ -8907,7 +8907,7 @@ If you receive error indicating the custom resource definitions or resource type

7. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```bash
- kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources-upgrade-from-calico.yaml
+ kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources-upgrade-from-calico.yaml
```

**Tab: EKS**

@@ -8915,7 +8915,7 @@ If you receive error indicating the custom resource definitions or resource type

1. Download the new manifests for Tigera Operator.

```bash
- curl -L -O https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml
+ curl -L -O https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml
```

2. Download the new manifests for Prometheus operator.

@@ -8925,7 +8925,7 @@ If you receive error indicating the custom resource definitions or resource type

> If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
```bash
- curl -L -O https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ curl -L -O https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

3. If you previously [installed using a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry), you will need to [push the new images ](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#push-calico-enterprise-images-to-your-private-registry)and then [update the manifest](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#run-the-operator-using-images-from-your-private-registry) downloaded in the previous step.

@@ -8957,7 +8957,7 @@ If you receive error indicating the custom resource definitions or resource type

7. Install the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```bash
- kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/eks/custom-resources-upgrade-from-calico.yaml
+ kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/eks/custom-resources-upgrade-from-calico.yaml
```

**Tab: AKS**

@@ -8983,7 +8983,7 @@ These upgrade instructions will upgrade your AKS clusters with Azure CNI and an

2. Download the new manifests for Tigera Operator.

```bash
- curl -L -o tigera-operator.yaml https://downloads.tigera.io/ee/v3.22.3/manifests/aks/tigera-operator-upgrade.yaml
+ curl -L -o tigera-operator.yaml https://downloads.tigera.io/ee/v3.22.4/manifests/aks/tigera-operator-upgrade.yaml
```

3. Download the new manifests for Prometheus operator.
@@ -8993,7 +8993,7 @@ These upgrade instructions will upgrade your AKS clusters with Azure CNI and an

> If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.

```bash
- curl -L -O https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml
+ curl -L -O https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml
```

4. If you previously [installed using a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry), you will need to [push the new images ](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#push-calico-enterprise-images-to-your-private-registry)and then [update the manifest](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular#run-the-operator-using-images-from-your-private-registry) downloaded in the previous step.

@@ -9025,7 +9025,7 @@ These upgrade instructions will upgrade your AKS clusters with Azure CNI and an

8. Download the custom resources manifest.

```bash
- curl -L -o custom-resources.yaml https://downloads.tigera.io/ee/v3.22.3/manifests/aks/custom-resources-upgrade-from-calico.yaml
+ curl -L -o custom-resources.yaml https://downloads.tigera.io/ee/v3.22.4/manifests/aks/custom-resources-upgrade-from-calico.yaml
```

9. If you are [installing using a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry), you will need to update the manifest downloaded in the previous step. Update the `spec.registry`, `spec.imagePath`, and `spec.imagePrefix` fields of the installation resource with the registry name, image path, and image prefix of your private registry.
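The three Installation fields named in step 9 look roughly like this. The field paths (`spec.registry`, `spec.imagePath`, `spec.imagePrefix`) come from the step above; the values and the minimal resource skeleton around them are placeholders, so merge these settings into the real `custom-resources.yaml` rather than applying this fragment as-is:

```shell
#!/usr/bin/env bash
# Write an illustrative Installation fragment showing the fields to set
# for a private registry; the values are placeholders.
cat > installation-registry.yaml <<'EOF'
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  registry: registry.example.com   # spec.registry: your private registry host
  imagePath: tigera-images         # spec.imagePath: path under the registry
  imagePrefix: ""                  # spec.imagePrefix: prefix added to image names
EOF
```
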
@@ -9088,17 +9088,17 @@ If you receive error indicating the custom resource definitions or resource type

1. Get the Helm chart

```bash
- curl -O -L https://downloads.tigera.io/ee/charts/tigera-operator-v3.22.3-0.tgz
+ curl -O -L https://downloads.tigera.io/ee/charts/tigera-operator-v3.22.4-0.tgz
```

2. Install the Calico Enterprise custom resource definitions.

```bash
- kubectl apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml
+ kubectl apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/prometheus-operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/prometheus-operator-crds.yaml
- kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/eck-operator-crds.yaml
+ kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/eck-operator-crds.yaml
```

3. [Configure a storage class for Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/operations/logstorage/create-storage)

@@ -9106,7 +9106,7 @@ If you receive error indicating the custom resource definitions or resource type

4. Run the Helm upgrade command for `tigera-operator`:

```bash
- helm upgrade calico tigera-operator-v3.22.3-0.tgz \
+ helm upgrade calico tigera-operator-v3.22.4-0.tgz \
--set-file imagePullSecrets.tigera-pull-secret=,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret= \
```

@@ -9187,7 +9187,7 @@ Download the Calico Enterprise manifests for OpenShift and add t

```bash
mkdir calico
-wget -qO- https://downloads.tigera.io/ee/v3.22.3/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico --exclude=03-cr-*
+wget -qO- https://downloads.tigera.io/ee/v3.22.4/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico --exclude=03-cr-*
cp calico/* manifests/
```

@@ -9219,7 +9219,7 @@ sed -i "s/SECRET/${SECRET}/" manifests/02-pull-secret.yaml

3. Create the custom resources for Calico Enterprise features, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).

```bash
- oc apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-enterprise-resources.yaml
+ oc apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-enterprise-resources.yaml
```

4. Patch installation.

@@ -9269,7 +9269,7 @@ Apply the Calico Enterprise manifests for the Prometheus operato

> that you manage yourself.

```bash
-oc apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-prometheus-operator.yaml
+oc apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-prometheus-operator.yaml
```

You can now monitor progress with the following command:

@@ -9297,6 +9297,7 @@ This feature is:

| Patch version | Release archive link |
| ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
+| v3.22.4 | [https://downloads.tigera.io/ee/archives/release-v3.22.4-v1.40.10.tgz](https://downloads.tigera.io/ee/archives/release-v3.22.4-v1.40.10.tgz) |
| v3.22.3 | [https://downloads.tigera.io/ee/archives/release-v3.22.3-v1.40.9.tgz](https://downloads.tigera.io/ee/archives/release-v3.22.3-v1.40.9.tgz) |
| v3.22.2 | [https://downloads.tigera.io/ee/archives/release-v3.22.2-v1.40.6.tgz](https://downloads.tigera.io/ee/archives/release-v3.22.2-v1.40.6.tgz) |
| v3.22.1 | [https://downloads.tigera.io/ee/archives/release-v3.22.1-v1.40.5.tgz](https://downloads.tigera.io/ee/archives/release-v3.22.1-v1.40.5.tgz) |

@@ -9977,13 +9978,17 @@ You can specify core configuration elements of your ingress gateway by specifyin

Many customizations are available for the `GatewayAPI` resource. This resource has fields that allow some aspects of Gateway deployments to be customized.
For example:

-- `spec.gatewayDeployment.spec.template.metadata` allows arbitrary labels or annotations to be added to the pod that is created to implement each configured Gateway.
-- `spec.gatewayDeployment.spec.template.spec.nodeSelector` allows control over where gateway implementation pods are scheduled.
-- `spec.gatewayDeployment.spec.template.spec.containers` allows control over the memory and CPU that the gateway implementation pods can use.
- `spec.gatewayControllerDeployment.spec.template.spec.nodeSelector` allows control over where the gateway controller is scheduled.
-- `spec.gatewayDeployment.service.metadata.annotations` allows control over annotations to place on the service that is provisioned for each Gateway; these can be used, for example, to configure the cloud-specific type and properties of the associated external load balancer.
-- `spec.gatewayDeployment.service.spec.*loadbalancer*` allows control over the corresponding `*loadbalancer*` fields in the service that is provisioned for each gateway; in some clouds these can also be used to configure the type and properties of the associated external load balancer.
-- `spec.gatewayClasses` allows the provisioning of multiple `GatewayClass` resources, each with its own set of customizations, instead of the single `tigera-gateway-class` gateway class that the Tigera Operator provisions by default.
+
+- `spec.gatewayClasses` allows the provisioning of multiple `GatewayClass` resources, each with its own set of customizations, instead of the single `tigera-gateway-class` gateway class that the Tigera Operator provisions by default. Possible `GatewayClass` customizations include the following:
+
+  - `gatewayDeployment.spec.template.metadata` allows arbitrary labels or annotations to be added to the pod that is created to implement each Gateway within that class.
+  - `gatewayDeployment.spec.template.spec.nodeSelector` allows control over where gateway implementation pods are scheduled.
+  - `gatewayDeployment.spec.template.spec.containers` allows control over the memory and CPU that the gateway implementation pods can use.
+  - `gatewayService.metadata.annotations` allows control over annotations to place on the service that is provisioned for each Gateway; these can be used, for example, to configure the cloud-specific type and properties of the associated external load balancer.
+  - `gatewayService.spec.*loadBalancer*` allows control over the corresponding `*loadBalancer*` fields in the service that is provisioned for each gateway; in some clouds these can also be used to configure the type and properties of the associated external load balancer.

For full details, see [the `GatewayAPI` reference documentation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#gatewayapi).

@@ -19414,109 +19419,109 @@ Writing network policies is how you restrict traffic to pods in your Kubernetes

##### [Policy best practices](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-best-practices)

-[Learn policy best practices for security, scalability, and performance.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-best-practices)

+[Best practices for Calico Enterprise policy — security posture, scalability with tiers, and performance tuning under load.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-best-practices)

##### [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/kubernetes-default-deny)

-[Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/kubernetes-default-deny)

+[Apply a default-deny network policy in a Calico Enterprise cluster so unprotected pods are denied traffic until explicit policy is written.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/kubernetes-default-deny)

##### [Get started with Calico network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-network-policy)

-[Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-network-policy)

+[Write your first Calico Enterprise NetworkPolicy — sample policies that exercise the rich rule features beyond Kubernetes NetworkPolicy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-network-policy)

##### [Get started with network sets](https://docs.tigera.io/calico-enterprise/latest/network-policy/networksets)

-[Learn the power of network sets and why you should create them.](https://docs.tigera.io/calico-enterprise/latest/network-policy/networksets)

+[Use Calico Enterprise network sets to package frequently reused IP ranges or domains into named selectors that policies can reference.](https://docs.tigera.io/calico-enterprise/latest/network-policy/networksets)

##### [DNS policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/domain-based-policy)

-[Use domain names to allow traffic to destinations outside of a cluster by their DNS names instead of by their IP addresses.](https://docs.tigera.io/calico-enterprise/latest/network-policy/domain-based-policy)

+[Allow traffic to external destinations by DNS name using Calico Enterprise domain-based policy rules — without maintaining static IP lists.](https://docs.tigera.io/calico-enterprise/latest/network-policy/domain-based-policy)

##### [Enable policy recommendations](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations)

-[Enable continuous policy recommendations to secure unprotected namespaces or workloads.](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations)

+[Run continuous Calico Enterprise policy recommendations so unprotected namespaces and workloads pick up baseline policy automatically.](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations)

## Policy rules[​](#policy-rules)

##### [Basic rules](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/policy-rules-overview)

-[Define network connectivity for Calico endpoints using policy rules and label selectors.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/policy-rules-overview)

+[How to write policy rules in Calico Enterprise — label selectors, source and destination match criteria, and rule actions.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/policy-rules-overview)

##### [Use namespace rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/namespace-policy)

-[Use namespaces and namespace selectors in Calico network policy to group or separate resources. Use network policies to allow or deny traffic to/from pods that belong to specific namespaces.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/namespace-policy)

+[Group or separate workloads in Calico Enterprise policy using namespaces and namespace selectors so policies apply only to specified namespaces.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/namespace-policy)

##### [Use service rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-policy)

-[Use Kubernetes Service names in policy rules.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-policy)

+[Match on Kubernetes Service names in Calico Enterprise policy rules instead of specific pod selectors.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-policy)

##### [Use service accounts rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-accounts)

-[Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-accounts)

+[Match on Kubernetes service accounts in Calico Enterprise policy rules to validate workload identity and apply RBAC-controlled rules.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-accounts)

##### [Use external IPs or networks rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/external-ips-policy)

-[Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/external-ips-policy)

+[Restrict egress and ingress to specific IP ranges in Calico Enterprise policy, either inline or via reusable network sets.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/external-ips-policy)

##### [Use ICMP/ping rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/icmp-ping)

-[Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/icmp-ping)

+[Allow or deny ICMP and ping traffic for Calico Enterprise workloads and host endpoints using policy rules.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/icmp-ping)

## Policy for hosts and VMs[​](#policy-for-hosts-and-vms)

##### [Protect hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts)

-[Create Calico Enterprise network policies to restrict traffic to/from hosts.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts)

+[Protect Kubernetes hosts and bare-metal nodes with Calico Enterprise policy by writing rules that target host endpoints.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts)

##### [Protect Kubernetes nodes](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/kubernetes-nodes)

-[Protect Kubernetes nodes with host endpoints managed by Calico Enterprise.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/kubernetes-nodes)

+[Protect Kubernetes node interfaces with Calico Enterprise host endpoints to extend network policy to the node itself.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/kubernetes-nodes)

##### [Protect hosts tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts-tutorial)

-[Learn how to secure incoming traffic from outside the cluster using Calico host endpoints with network policy, including allowing controlled access to specific Kubernetes services.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts-tutorial)

+[Tutorial for protecting hosts in a Calico Enterprise cluster — register host endpoints, write rules, and allow controlled access to specific Kubernetes services.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts-tutorial)

##### [Apply policy to forwarded traffic](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/host-forwarded-traffic)

-[Apply Calico Enterprise network policy to traffic being forward by hosts acting as routers or NAT gateways.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/host-forwarded-traffic)

+[Apply Calico Enterprise network policy to traffic forwarded through hosts acting as routers or NAT gateways.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/host-forwarded-traffic)

## Policy tiers[​](#policy-tiers)

##### [Get started with policy tiers](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy)

-[Understand how tiered policy works and supports microsegmentation.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy)

+[How tiered policy works in Calico Enterprise — evaluation order, pass actions, and using tiers to enforce microsegmentation across teams.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy)

##### [Change allow-tigera tier behavior](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/allow-tigera)

-[Understand how to change the behavior of the allow-tigera tier.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/allow-tigera)

+[Customize the behavior of the allow-tigera tier that Calico Enterprise installs by default to
keep its own components reachable.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/allow-tigera) ##### [Network policy tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/policy-tutorial-ui) -[Covers the basics of Calico Enterprise network policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/policy-tutorial-ui) +[Tutorial for the Calico Enterprise policy management UI — author, order, and stage policies inside tiers from the web console.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/policy-tutorial-ui) ##### [Configure RBAC for tiered policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/rbac-tiered-policies) -[Configure RBAC to control access to policies and tiers.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/rbac-tiered-policies) +[Configure Kubernetes RBAC to control which users can edit Calico Enterprise policies in each tier.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/rbac-tiered-policies) ## Policy for services[​](#policy-for-services) ##### [Apply Calico Enterprise policy to Kubernetes node ports](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/kubernetes-node-ports) -[Restrict access to Kubernetes node ports using Calico Enterprise global network policy. 
Follow the steps to secure the host, the node ports, and the cluster.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/kubernetes-node-ports) +[Restrict access to Kubernetes NodePort services using a Calico Enterprise GlobalNetworkPolicy at the host endpoint.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/kubernetes-node-ports) ##### [Apply Calico Enterprise policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/services-cluster-ips) -[Expose Kubernetes service cluster IPs over BGP using Calico Enterprise, and restrict who can access them using Calico Enterprise network policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/services-cluster-ips) +[Expose Kubernetes Service ClusterIPs over BGP using Calico Enterprise and restrict who can reach them with network policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/services-cluster-ips) ## Policy for extreme traffic[​](#policy-for-extreme-traffic) ##### [Enable extreme high-connection workloads](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads) -[Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience extremely large number of connections.](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads) +[Bypass Linux conntrack with a Calico Enterprise policy rule for workloads that handle an extreme number of concurrent connections.](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads) ##### [Defend against DoS attacks](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/defend-dos-attack) -[Define DoS mitigation rules in Calico Enterprise policy to quickly drop connections 
when under attack. Learn how rules use eBPF and XDP, including hardware offload when available.](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/defend-dos-attack) +[Define DoS mitigation rules in Calico Enterprise policy that drop connections at the eBPF or XDP layer, with hardware offload when available.](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/defend-dos-attack) ### Policy recommendations @@ -19524,11 +19529,11 @@ Writing network policies is how you restrict traffic to pods in your Kubernetes ## [📄️Enable policy recommendations](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations) -[Enable continuous policy recommendations to secure unprotected namespaces or workloads.](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations) +[Run continuous Calico Enterprise policy recommendations so unprotected namespaces and workloads pick up baseline policy automatically.](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations) ## [📄️Policy recommendations tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/learn-about-policy-recommendations) -[Policy recommendations tutorial.](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/learn-about-policy-recommendations) +[Tutorial walkthrough of the Calico Enterprise policy recommendations engine — what it generates, how to review it, and how to promote it to enforced.](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/learn-about-policy-recommendations) ### Enable policy recommendations @@ -20364,19 +20369,19 @@ Zero trust means that you do not trust anyone or anything. 
Calico Enterprise han ## [📄️Get started with policy tiers](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy) -[Understand how tiered policy works and supports microsegmentation.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy) +[How tiered policy works in Calico Enterprise — evaluation order, pass actions, and using tiers to enforce microsegmentation across teams.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy) ## [📄️Change allow-tigera tier behavior](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/allow-tigera) -[Understand how to change the behavior of the allow-tigera tier.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/allow-tigera) +[Customize the behavior of the allow-tigera tier that Calico Enterprise installs by default to keep its own components reachable.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/allow-tigera) ## [📄️Network policy tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/policy-tutorial-ui) -[Covers the basics of Calico Enterprise network policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/policy-tutorial-ui) +[Tutorial for the Calico Enterprise policy management UI — author, order, and stage policies inside tiers from the web console.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/policy-tutorial-ui) ## [📄️Configure RBAC for tiered policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/rbac-tiered-policies) -[Configure RBAC to control access to policies and tiers.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/rbac-tiered-policies) +[Configure Kubernetes RBAC to control which users can edit Calico Enterprise policies in each 
tier.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/rbac-tiered-policies) ### Get started with policy tiers @@ -22423,19 +22428,19 @@ For help with Pass action rules, see [Get started with tiered policy](https://do ## [📄️Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/kubernetes-default-deny) -[Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/kubernetes-default-deny) +[Apply a default-deny network policy in a Calico Enterprise cluster so unprotected pods are denied traffic until explicit policy is written.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/kubernetes-default-deny) ## [📄️Get started with Calico network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-network-policy) -[Create your first Calico network policies. 
Shows the rich features using sample policies that extend native Kubernetes network policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-network-policy) +[Write your first Calico Enterprise NetworkPolicy — sample policies that exercise the rich rule features beyond Kubernetes NetworkPolicy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-network-policy) ## [📄️Calico Enterprise automatic labels](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-labels) -[Calico Enterprise automatic labels for use with resources.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-labels) +[Reference list of automatic labels Calico Enterprise attaches to resources, useful as selectors in policy rules.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-labels) ## [📄️Calico Enterprise for Kubernetes demo](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/simple-policy-cnx) -[Learn the extra features for Calico Enterprise that make it so important for production environments.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/simple-policy-cnx) +[Tour of the additional features Calico Enterprise adds to Kubernetes policy that make it suitable for production environments.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/simple-policy-cnx) ## [🗃Policy rules](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/) @@ -23492,27 +23497,27 @@ Now, let's enable access to the nginx service using a NetworkPolicy. 
This will a ## [📄️Basic rules](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/policy-rules-overview) -[Define network connectivity for Calico endpoints using policy rules and label selectors.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/policy-rules-overview) +[How to write policy rules in Calico Enterprise — label selectors, source and destination match criteria, and rule actions.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/policy-rules-overview) ## [📄️Use namespace rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/namespace-policy) -[Use namespaces and namespace selectors in Calico network policy to group or separate resources. Use network policies to allow or deny traffic to/from pods that belong to specific namespaces.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/namespace-policy) +[Group or separate workloads in Calico Enterprise policy using namespaces and namespace selectors so policies apply only to specified namespaces.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/namespace-policy) ## [📄️Use service accounts rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-accounts) -[Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-accounts) +[Match on Kubernetes service accounts in Calico Enterprise policy rules to validate workload identity and apply RBAC-controlled rules.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-accounts) ## [📄️Use service rules in 
policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-policy) -[Use Kubernetes Service names in policy rules.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-policy) +[Match on Kubernetes Service names in Calico Enterprise policy rules instead of specific pod selectors.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-policy) ## [📄️Use external IPs or networks rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/external-ips-policy) -[Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/external-ips-policy) +[Restrict egress and ingress to specific IP ranges in Calico Enterprise policy, either inline or via reusable network sets.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/external-ips-policy) ## [📄️Use ICMP/ping rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/icmp-ping) -[Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/icmp-ping) +[Allow or deny ICMP and ping traffic for Calico Enterprise workloads and host endpoints using policy rules.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/icmp-ping) ### Basic rules @@ -24315,11 +24320,11 @@ For more on the ICMP match criteria, see: ## [📄️Apply Calico Enterprise policy to Kubernetes node ports](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/kubernetes-node-ports) -[Restrict access to Kubernetes node ports using Calico Enterprise 
global network policy. Follow the steps to secure the host, the node ports, and the cluster.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/kubernetes-node-ports) +[Restrict access to Kubernetes NodePort services using a Calico Enterprise GlobalNetworkPolicy at the host endpoint.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/kubernetes-node-ports) ## [📄️Apply Calico Enterprise policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/services-cluster-ips) -[Expose Kubernetes service cluster IPs over BGP using Calico Enterprise, and restrict who can access them using Calico Enterprise network policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/services-cluster-ips) +[Expose Kubernetes Service ClusterIPs over BGP using Calico Enterprise and restrict who can reach them with network policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/services-cluster-ips) ### Apply Calico Enterprise policy to Kubernetes node ports @@ -25081,11 +25086,11 @@ For more detail about the relevant resources, see [GlobalNetworkSet](https://doc ## [📄️Enable and enforce application layer policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp) -[Enforce application layer policies in your cluster to configure access controls based on L7 attributes.](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp) +[Configure access controls based on Layer-7 attributes by enforcing Calico Enterprise application-layer policy in the cluster.](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp) ## [📄️Application layer policy tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp-tutorial) -[Learn how to apply 
ALP to your workloads and control ingress traffic.](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp-tutorial) +[Step-by-step tutorial for applying Calico Enterprise application-layer policy to workloads — control ingress traffic by HTTP attributes.](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp-tutorial)

### Enable and enforce application layer policies

@@ -25415,15 +25420,15 @@ We omitted the JSON formatting because we do not expect to get a valid JSON response.
Kubernetes](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration) -[Extend FortiManager firewall policies to Kubernetes with Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration) +[Extend FortiManager firewall policies into Kubernetes workloads in a Calico Enterprise cluster.](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration) ### Determine the best Calico Enterprise/Fortinet solution @@ -25593,7 +25598,7 @@ The basic workflow is: ### Create a config map with FortiGate and FortiManager information[​](#create-a-config-map-with-fortigate-and-fortimanager-information) -1. In the [FortiGate ConfigMap manifest](https://downloads.tigera.io/ee/v3.22.3/manifests/fortinet-device-configmap.yaml), add your FortiGate firewall information in the data section, `tigera.firewall.fortigate`. +1. In the [FortiGate ConfigMap manifest](https://downloads.tigera.io/ee/v3.22.4/manifests/fortinet-device-configmap.yaml), add your FortiGate firewall information in the data section, `tigera.firewall.fortigate`. Where: @@ -25638,7 +25643,7 @@ The basic workflow is: vdom: fortigate-vdom2 ``` -2. In the [FortiManager ConfigMap manifest](https://downloads.tigera.io/ee/v3.22.3/manifests/fortinet-device-configmap.yaml), add your FortiManager information in the data section, `tigera.firewall.fortimgr`. +2. In the [FortiManager ConfigMap manifest](https://downloads.tigera.io/ee/v3.22.4/manifests/fortinet-device-configmap.yaml), add your FortiManager information in the data section, `tigera.firewall.fortimgr`. Where: @@ -25677,7 +25682,7 @@ The basic workflow is: 1. Apply the manifest. 
```text - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/fortinet-device-configmap.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/fortinet-device-configmap.yaml ``` ### Install FortiGate ApiKey and FortiManager password as secrets[​](#install-fortigate-apikey-and-fortimanager-password-as-secrets) @@ -25717,7 +25722,7 @@ The basic workflow is: 2. Apply the manifest. ```text - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/fortinet.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/fortinet.yaml ``` ## Verify the integration[​](#verify-the-integration) @@ -25812,7 +25817,7 @@ Create a [Calico Enterprise tier](https://docs.tigera.io/calico-enterprise/lates kubectl create namespace tigera-firewall-controller ``` -2. In this [FortiManager ConfigMap manifest](https://downloads.tigera.io/ee/v3.22.3/manifests/fortimanager-device-configmap.yaml), add your FortiManager device information in the data section: `tigera.firewall.fortimanager-policies`. For example: +2. In this [FortiManager ConfigMap manifest](https://downloads.tigera.io/ee/v3.22.4/manifests/fortimanager-device-configmap.yaml), add your FortiManager device information in the data section: `tigera.firewall.fortimanager-policies`. For example: ```yaml tigera.firewall.fortimanager-policies: | @@ -25855,7 +25860,7 @@ Create a [Calico Enterprise tier](https://docs.tigera.io/calico-enterprise/lates 3. Apply the manifest. ```bash - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/fortimanager-device-configmap.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/fortimanager-device-configmap.yaml ``` ## Install FortiManager password as secrets[​](#install-fortimanager-password-as-secrets) @@ -25887,7 +25892,7 @@ kubectl create secret generic fortimgr-east1 \ 2. Apply the manifest. 
```bash - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/fortimanager.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/fortimanager.yaml ``` ## Verify the integration[​](#verify-the-integration) @@ -25904,19 +25909,19 @@ kubectl create secret generic fortimgr-east1 \ ## [📄️Protect hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts) -[Create Calico Enterprise network policies to restrict traffic to/from hosts.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts) +[Protect Kubernetes hosts and bare-metal nodes with Calico Enterprise policy by writing rules that target host endpoints.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts) ## [📄️Protect Kubernetes nodes](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/kubernetes-nodes) -[Protect Kubernetes nodes with host endpoints managed by Calico Enterprise.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/kubernetes-nodes) +[Protect Kubernetes node interfaces with Calico Enterprise host endpoints to extend network policy to the node itself.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/kubernetes-nodes) ## [📄️Protect hosts tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts-tutorial) -[Learn how to secure incoming traffic from outside the cluster using Calico host endpoints with network policy, including allowing controlled access to specific Kubernetes services.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts-tutorial) +[Tutorial for protecting hosts in a Calico Enterprise cluster — register host endpoints, write rules, and allow controlled access to specific Kubernetes services.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts-tutorial) ## [📄️Apply policy to forwarded 
traffic](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/host-forwarded-traffic) -[Apply Calico Enterprise network policy to traffic being forward by hosts acting as routers or NAT gateways.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/host-forwarded-traffic) +[Apply Calico Enterprise network policy to traffic forwarded through hosts acting as routers or NAT gateways.](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/host-forwarded-traffic) ### Protect hosts and VMs @@ -27114,11 +27119,11 @@ For preDNAT policies, flow logs display the original destination IP and port bef ## [📄️Enable extreme high-connection workloads](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads) -[Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience extremely large number of connections.](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads) +[Bypass Linux conntrack with a Calico Enterprise policy rule for workloads that handle an extreme number of concurrent connections.](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads) ## [📄️Defend against DoS attacks](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/defend-dos-attack) -[Define DoS mitigation rules in Calico Enterprise policy to quickly drop connections when under attack. 
Learn how rules use eBPF and XDP, including hardware offload when available.](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/defend-dos-attack) +[Define DoS mitigation rules in Calico Enterprise policy that drop connections at the eBPF or XDP layer, with hardware offload when available.](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/defend-dos-attack) ### Enable extreme high-connection workloads @@ -27378,35 +27383,35 @@ spec: ## [📄️What is network policy?](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-network-policy) -[Learn the basics of Kubernetes and Calico Enterprise network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-network-policy) +[Concepts you need before writing Calico Enterprise policy — how Kubernetes NetworkPolicy, Calico policy, and tiers interact.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-network-policy) ## [📄️Get started with Kubernetes network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-network-policy) -[Learn Kubernetes policy syntax, rules, and features for controlling network traffic.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-network-policy) +[Reference for Kubernetes NetworkPolicy syntax, rules, and features when used with the Calico Enterprise enforcement engine.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-network-policy) ## [📄️Kubernetes policy, demo](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-demo) -[An interactive demo that visually shows how applying Kubernetes policy allows and denies connections.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-demo) +[Interactive demo for a Calico Enterprise cluster that visualizes how Kubernetes 
NetworkPolicy allows and denies connections between pods.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-demo) ## [📄️Kubernetes policy, basic tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-basic) -[Learn how to use basic Kubernetes network policy to securely restrict traffic to/from pods.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-basic) +[Apply your first Kubernetes NetworkPolicy in a Calico Enterprise cluster to restrict ingress and egress traffic to and from pods.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-basic) ## [📄️Kubernetes policy, advanced tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-advanced) -[Learn how to create more advanced Kubernetes network policies (namespace, allow and deny all ingress and egress).](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-advanced) +[Write more advanced Kubernetes NetworkPolicy resources in a Calico Enterprise cluster — namespace scoping, allow-all, and deny-all variants.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-advanced) ## [📄️Kubernetes services](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-services) -[Learn the three main service types and how to use them.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-services) +[How the three Kubernetes Service types behave in a Calico Enterprise cluster and where each one shows up in policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-services) ## [📄️Kubernetes ingress](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-ingress) -[Learn the different 
ingress implementations and how ingress and policy interact.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-ingress) +[How different Kubernetes ingress implementations interact with Calico Enterprise network policy at the cluster edge.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-ingress) ## [📄️Kubernetes egress](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-egress) -[Learn why you should restrict egress traffic and how to do it.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-egress) +[Why egress traffic from Kubernetes workloads matters and how to restrict it with Calico Enterprise policy.](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-egress) ### What is network policy? @@ -32626,11 +32631,11 @@ Follow these steps in the cluster you intend to use as the managed cluster. 1. Install the Tigera Operator and custom resource definitions. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml ``` 2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics. @@ -32648,7 +32653,7 @@ Follow these steps in the cluster you intend to use as the managed cluster. > , your Prometheus operator must be v0.40.0 or higher. 
```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml ``` 3. Install your pull secret. @@ -32672,13 +32677,13 @@ Follow these steps in the cluster you intend to use as the managed cluster. 5. (Optional) Compliance and packet capture features are optional. To enable these features during installation, download and review the custom-resources.yaml file. Uncomment the necessary CRs and use this custom-resources.yaml for installation. ```bash - curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml + curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml ``` 6. Download the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). ```bash - curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml + curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml ``` Remove the `Manager` custom resource from the manifest file. @@ -32826,11 +32831,11 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us 1. Install the Tigera Operator and custom resource definitions. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml ``` 2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics. 
@@ -32848,7 +32853,7 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us > , your Prometheus operator must be v0.40.0 or higher. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml ``` 3. Install your pull secret. @@ -32868,7 +32873,7 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us 5. Download the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). ```bash - curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml + curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml ``` Remove the `Manager` custom resource from the manifest file. @@ -33016,11 +33021,11 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us 1. Install the Tigera Operator and custom resource definitions. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml ``` 2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics. @@ -33038,7 +33043,7 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us > , your Prometheus operator must be v0.40.0 or higher. 
```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml ``` 3. Install your pull secret. @@ -33058,7 +33063,7 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us 5. Download the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). ```bash - curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/eks/custom-resources.yaml + curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/eks/custom-resources.yaml ``` Remove the `Manager` custom resource from the manifest file. @@ -33144,11 +33149,11 @@ Before you get started, make sure you have downloaded and configured the 2. Install the Tigera Operator and custom resource definitions. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml ``` 3. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics. @@ -33166,7 +33171,7 @@ Before you get started, make sure you have downloaded and configured the > , your Prometheus operator must be v0.40.0 or higher. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml ``` 4. Install your pull secret. 
@@ -33188,7 +33193,7 @@ Before you get started, make sure you have downloaded and configured the 7. Download the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). ```bash - curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/eks/custom-resources-calico-cni.yaml + curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/eks/custom-resources-calico-cni.yaml ``` Remove the `Manager` custom resource from the manifest file. @@ -33350,11 +33355,11 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us 1. Install the Tigera Operator and custom resource definitions. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml ``` 2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics. @@ -33372,7 +33377,7 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us > , your Prometheus operator must be v0.40.0 or higher. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml ``` 3. Install your pull secret. @@ -33392,7 +33397,7 @@ kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-us 5. Download the Tigera custom resources. 
For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). ```bash - curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/aks/custom-resources.yaml + curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/aks/custom-resources.yaml ``` Remove the `Manager` custom resource from the manifest file. @@ -33456,11 +33461,11 @@ Wait until the `apiserver` shows a status of `Available`, then proceed to the ne 2. Install the Tigera Operator and custom resource definitions. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml ``` 3. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics. @@ -33478,7 +33483,7 @@ Wait until the `apiserver` shows a status of `Available`, then proceed to the ne > , your Prometheus operator must be v0.40.0 or higher. ```text - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-prometheus-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-prometheus-operator.yaml ``` 4. Install your pull secret. @@ -33498,7 +33503,7 @@ Wait until the `apiserver` shows a status of `Available`, then proceed to the ne 6. Download the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). 
```bash - curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/aks/custom-resources-calico-cni.yaml + curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/aks/custom-resources-calico-cni.yaml ``` Remove the `Manager` custom resource from the manifest file. @@ -33748,7 +33753,7 @@ Download the Calico Enterprise manifests for OpenShift and add t ```bash mkdir calico -wget -qO- https://downloads.tigera.io/ee/v3.22.3/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico +wget -qO- https://downloads.tigera.io/ee/v3.22.4/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico cp calico/* manifests/ ``` @@ -33802,7 +33807,7 @@ Calico Enterprise requires storage for logs and reports. Before finishin Download the Tigera custom resources. For more information on configuration options available in this manifest, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). ```bash -curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-enterprise-resources.yaml +curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-enterprise-resources.yaml ``` Remove the `Manager` custom resource from the manifest file. @@ -33874,7 +33879,7 @@ Apply the Calico Enterprise manifests for the Prometheus operato > that you manage yourself. ```bash -oc create -f https://downloads.tigera.io/ee/v3.22.3/manifests/ocp/tigera-prometheus-operator.yaml +oc create -f https://downloads.tigera.io/ee/v3.22.4/manifests/ocp/tigera-prometheus-operator.yaml ``` You can now monitor progress with the following command: @@ -33888,7 +33893,7 @@ When it shows all components with status `Available`, proceed to the next step. (Optional) Apply the full CRDs including descriptions. 
```bash -oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml +oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml ``` #### Create the connection manifest for your managed cluster[​](#create-the-connection-manifest-for-your-managed-cluster) @@ -34029,7 +34034,7 @@ helm repo add tigera-ee https://downloads.tigera.io/ee/charts helm repo update -helm pull tigera-ee/tigera-operator --version v3.22.3 +helm pull tigera-ee/tigera-operator --version v3.22.4 ``` ### Prepare the Installation Configuration[​](#prepare-the-installation-configuration) @@ -34161,7 +34166,7 @@ managedClusters: 1. Install the Tigera Operator and custom resource definitions using the Helm 3 chart: ```bash -helm install calico-enterprise tigera-operator-v3.22.3-0.tgz -f values.yaml \ +helm install calico-enterprise tigera-operator-v3.22.4-0.tgz -f values.yaml \ --set-file imagePullSecrets.tigera-pull-secret=,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret= \ @@ -34273,7 +34278,7 @@ managementCluster: 1. Install the Tigera Operator and custom resource definitions using the Helm 3 chart: ```bash -helm install calico-enterprise tigera-operator-v3.22.3-0.tgz -f values.yaml \ +helm install calico-enterprise tigera-operator-v3.22.4-0.tgz -f values.yaml \ --set-file imagePullSecrets.tigera-pull-secret=,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret= \ @@ -34379,7 +34384,7 @@ helm repo add tigera-ee https://downloads.tigera.io/ee/charts helm repo update -helm pull tigera-ee/tigera-operator --version v3.22.3 +helm pull tigera-ee/tigera-operator --version v3.22.4 ``` ### Prepare the Installation Configuration[​](#prepare-the-installation-configuration) @@ -34491,7 +34496,7 @@ managementClusterConnection: 1. 
Install the Tigera Operator and custom resource definitions using the Helm 3 chart: ```bash -helm install calico-enterprise tigera-operator-v3.22.3-0.tgz -f values.yaml \ +helm install calico-enterprise tigera-operator-v3.22.4-0.tgz -f values.yaml \ --set-file imagePullSecrets.tigera-pull-secret=,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret= \ @@ -34893,7 +34898,7 @@ The steps in this section assume that a management cluster is up and running. 3. Install the Tigera custom resources. For more information, see [the installation reference](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). ```bash - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml ``` 4. Monitor the progress with the following command: @@ -35025,13 +35030,13 @@ In this section, we will create a `kubeconfig` for each cluster. This `kubeconfi 1. Create the ServiceAccount used by remote clusters for authentication: ```bash - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/federation-remote-sa.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/federation-remote-sa.yaml ``` 2. Create the ClusterRole and ClusterRoleBinding used by remote clusters for authorization: ```bash - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/federation-rem-rbac-kdd.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/federation-rem-rbac-kdd.yaml ``` 3. 
Create the ServiceAccount token that will be used in the `kubeconfig`: @@ -36829,7 +36834,7 @@ In this section we will look at how to add Tor and VPN feeds to Calico Enterpris ```shell - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/threatdef/vpn-feed.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/threatdef/vpn-feed.yaml ``` @@ -36839,7 +36844,7 @@ In this section we will look at how to add Tor and VPN feeds to Calico Enterpris ```shell - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/threatdef/tor-exit-feed.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/threatdef/tor-exit-feed.yaml ``` 2. Now, you can monitor the Dashboard for any malicious activity. The dashboard can be found at the Calico Enterprise web console, go to "kibana" and then go to "Dashboard". Select "Tor-VPN Dashboard". @@ -37713,9 +37718,9 @@ To enable WAF on a Calico Ingress Gateway: To deploy a WAF on multiple gateways, you must create a separate `EnvoyExtensionPolicy` resource for each `Gateway` resource. Each `EnvoyExtensionPolicy` must reference the same `tigera-waf-backend` backend. -3. To verify that the WAF is enabled for your gateway, you can simulate an SQL injection attack through your gateway and see whether it triggers a security event. +3. To verify that the WAF is enabled for your gateway, you can simulate an SQL injection attack through your gateway and check that the WAF logs the request. - > **SECONDARY:** The query string in this example has some SQL syntax embedded in the text. This is harmless and for demo purposes, but WAF will detect this pattern and create an WAF log for this HTTP request. + > **SECONDARY:** The query string in this example has some SQL syntax embedded in the text. This is harmless and for demo purposes, but WAF will detect this pattern and create a WAF log for this HTTP request. By design, Calico Ingress Gateway WAF emits WAF logs rather than security events. 1. 
Get the service IP of your gateway: @@ -37723,13 +37728,13 @@ To enable WAF on a Calico Ingress Gateway: export GATEWAY_HOST=$(kubectl get gateway/ -o jsonpath='{.status.addresses[0].value}') ``` - 2. Trigger a security event by simulating an SQL injection attack on that service IP: + 2. Simulate an SQL injection attack on that service IP: ```bash curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/?artist=0+div+1+union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A1%2C2%2Ccurrent_user ``` - 3. From the web console, go to **Threat > Security Events** and check for a security event with corresponds with this simulated SQL injection attack. + 3. In Kibana, select the `tigera_secure_ee_waf*` index pattern and look for a WAF log that corresponds with this request. If blocking mode is enabled, the `curl` command also returns a `403 Forbidden` response. ## Customizing your WAF configuration for an ingress gateway[​](#customizing-your-waf-configuration-for-an-ingress-gateway) @@ -38521,13 +38526,13 @@ To run a report on demand: For management and standalone clusters: ```bash - curl -O https://downloads.tigera.io/ee/v3.22.3/manifests/compliance-reporter-pod.yaml + curl -O https://downloads.tigera.io/ee/v3.22.4/manifests/compliance-reporter-pod.yaml ``` For managed clusters: ```bash - curl https://downloads.tigera.io/ee/v3.22.3/manifests/compliance-reporter-pod-managed.yaml -o compliance-reporter-pod.yaml + curl https://downloads.tigera.io/ee/v3.22.4/manifests/compliance-reporter-pod-managed.yaml -o compliance-reporter-pod.yaml ``` 2. 
Edit the template as follows: @@ -38716,13 +38721,13 @@ To manually run a report: For management and standalone clusters: ```bash - curl -O https://downloads.tigera.io/ee/v3.22.3/manifests/compliance-reporter-pod.yaml + curl -O https://downloads.tigera.io/ee/v3.22.4/manifests/compliance-reporter-pod.yaml ``` For managed clusters: ```bash - curl https://downloads.tigera.io/ee/v3.22.3/manifests/compliance-reporter-pod-managed.yaml -o compliance-reporter-pod.yaml + curl https://downloads.tigera.io/ee/v3.22.4/manifests/compliance-reporter-pod-managed.yaml -o compliance-reporter-pod.yaml ``` 2. Edit the template as follows: @@ -41705,7 +41710,7 @@ Log into the host, open a terminal prompt, and navigate to the location where yo Use the following command to download the `calicoctl` binary. ```bash -curl -o calicoctl -L https://downloads.tigera.io/ee/binaries/v3.22.3/calicoctl +curl -o calicoctl -L https://downloads.tigera.io/ee/binaries/v3.22.4/calicoctl ``` Set the file to be executable. @@ -41727,13 +41732,13 @@ Use the following commands to download the `calicoctl` binary. - ARM64 (Apple Silicon): ```bash - curl -o calicoctl -L https://downloads.tigera.io/ee/binaries/v3.22.3/calicoctl-darwin-arm64 + curl -o calicoctl -L https://downloads.tigera.io/ee/binaries/v3.22.4/calicoctl-darwin-arm64 ``` - AMD64 (Intel): ```bash - curl -o calicoctl -L https://downloads.tigera.io/ee/binaries/v3.22.3/calicoctl-darwin-amd64 + curl -o calicoctl -L https://downloads.tigera.io/ee/binaries/v3.22.4/calicoctl-darwin-amd64 ``` Set the file to be executable. @@ -41753,7 +41758,7 @@ Use the following PowerShell command to download the `calicoctl` binary. > **SUCCESS:** Consider running PowerShell as administrator and navigating to a location that's in your `PATH`. For example, `C:\Windows`. 
```bash -Invoke-WebRequest -Uri "https://downloads.tigera.io/ee/binaries/v3.22.3/calicoctl-windows-amd64.exe" -OutFile "calicoctl.exe" +Invoke-WebRequest -Uri "https://downloads.tigera.io/ee/binaries/v3.22.4/calicoctl-windows-amd64.exe" -OutFile "calicoctl.exe" ``` @@ -41771,7 +41776,7 @@ Log into the host, open a terminal prompt, and navigate to the location where yo Use the following command to download the `calicoctl` binary. ```bash -curl -o kubectl-calico -L https://downloads.tigera.io/ee/binaries/v3.22.3/calicoctl +curl -o kubectl-calico -L https://downloads.tigera.io/ee/binaries/v3.22.4/calicoctl ``` Set the file to be executable. @@ -41793,13 +41798,13 @@ Use the following commands to download the `calicoctl` binary. - ARM64 (Apple Silicon): ```bash - curl -o kubectl-calico -L https://downloads.tigera.io/ee/binaries/v3.22.3/calicoctl-darwin-arm64 + curl -o kubectl-calico -L https://downloads.tigera.io/ee/binaries/v3.22.4/calicoctl-darwin-arm64 ``` - AMD64 (Intel): ```bash - curl -o kubectl-calico -L https://downloads.tigera.io/ee/binaries/v3.22.3/calicoctl-darwin-amd64 + curl -o kubectl-calico -L https://downloads.tigera.io/ee/binaries/v3.22.4/calicoctl-darwin-amd64 ``` Set the file to be executable. @@ -41819,7 +41824,7 @@ Use the following PowerShell command to download the `calicoctl` binary. > **SUCCESS:** Consider running PowerShell as administrator and navigating to a location that's in your `PATH`. For example, `C:\Windows`. ```bash -Invoke-WebRequest -Uri "https://downloads.tigera.io/ee/binaries/v3.22.3/calicoctl-windows-amd64.exe" -OutFile "kubectl-calico.exe" +Invoke-WebRequest -Uri "https://downloads.tigera.io/ee/binaries/v3.22.4/calicoctl-windows-amd64.exe" -OutFile "kubectl-calico.exe" ``` @@ -41873,7 +41878,7 @@ You can now run any `calicoctl` subcommands through `kubectl calico`. 5. Use the following commands to pull the `calicoctl` image from the Tigera registry. 
```bash - docker pull quay.io/tigera/calicoctl:v3.22.3 + docker pull quay.io/tigera/calicoctl:v3.22.4 ``` 6. Confirm that the image has loaded by typing `docker images`. @@ -41881,7 +41886,7 @@ You can now run any `calicoctl` subcommands through `kubectl calico`. ```bash REPOSITORY TAG IMAGE ID CREATED SIZE - tigera/calicoctl v3.22.3 e07d59b0eb8a 2 minutes ago 42MB + tigera/calicoctl v3.22.4 e07d59b0eb8a 2 minutes ago 42MB ``` **Next step**: @@ -42078,7 +42083,7 @@ For step-by-step instructions, refer to the section that corresponds to your des 2. Use the following command to download the `calicoq` binary. ```text - curl -o calicoq -O -L https://downloads.tigera.io/ee/binaries/v3.22.3/calicoq + curl -o calicoq -O -L https://downloads.tigera.io/ee/binaries/v3.22.4/calicoq ``` 3. Set the file to be executable. @@ -42132,7 +42137,7 @@ For step-by-step instructions, refer to the section that corresponds to your des 5. Use the following commands to pull the `calicoq` image from the Tigera registry. ```bash - docker pull quay.io/tigera/calicoq:v3.22.3 + docker pull quay.io/tigera/calicoq:v3.22.4 ``` 6. Confirm that the image has loaded by typing `docker images`. @@ -42140,7 +42145,7 @@ For step-by-step instructions, refer to the section that corresponds to your des ```bash REPOSITORY TAG IMAGE ID CREATED SIZE - tigera/calicoq v3.22.3 e07d59b0eb8a 2 minutes ago 42MB + tigera/calicoq v3.22.4 e07d59b0eb8a 2 minutes ago 42MB ``` **Next step**: @@ -43020,7 +43025,7 @@ export NAMESPACE= ``` ```bash -kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/prometheus/elasticsearch-metrics-service-monitor.yaml -n $NAMESPACE +kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/prometheus/elasticsearch-metrics-service-monitor.yaml -n $NAMESPACE ``` The .yamls have no namespace defined so when you apply `kubectl`, it is applied in the $NAMESPACE. 
@@ -43062,7 +43067,7 @@ export NAMESPACE= ``` ```bash -kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/prometheus/fluentd-metrics-service-monitor.yaml -n $NAMESPACE +kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/prometheus/fluentd-metrics-service-monitor.yaml -n $NAMESPACE ``` The .yamls have no namespace defined so when you apply `kubectl`, it is applied in the $NAMESPACE. @@ -43104,7 +43109,7 @@ export NAMESPACE= ``` ```bash -kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/prometheus/calico-node-monitor-service-monitor.yaml -n $NAMESPACE +kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/prometheus/calico-node-monitor-service-monitor.yaml -n $NAMESPACE ``` The .yamls have no namespace defined so when you apply `kubectl`, it is applied in $NAMESPACE. @@ -43146,7 +43151,7 @@ export NAMESPACE= ``` ```bash -kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/prometheus/kube-controller-metrics-service-monitor.yaml -n $NAMESPACE +kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/prometheus/kube-controller-metrics-service-monitor.yaml -n $NAMESPACE ``` The .yamls have no namespace defined so when you apply `kubectl`, it is applied in the $NAMESPACE. @@ -43230,7 +43235,7 @@ export NAMESPACE= ``` ```bash -kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/prometheus/felix-metrics-service-monitor.yaml -n $NAMESPACE +kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/prometheus/felix-metrics-service-monitor.yaml -n $NAMESPACE ``` The .yamls have no namespace defined so when you apply `kubectl`, it is applied in the $NAMESPACE. 
@@ -43264,7 +43269,7 @@ export NAMESPACE= ``` ```bash -kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/prometheus/typha-metrics-service-monitor.yaml -n $NAMESPACE +kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/prometheus/typha-metrics-service-monitor.yaml -n $NAMESPACE ``` The .yamls have no namespace defined so when you apply `kubectl`, it is applied in the $NAMESPACE. @@ -44670,7 +44675,7 @@ To add the license-agent component in a Kubernetes cluster for license metrics, 3. Apply the manifest. ```text - kubectl apply -f https://downloads.tigera.io/ee/v3.22.3/manifests/licenseagent.yaml + kubectl apply -f https://downloads.tigera.io/ee/v3.22.4/manifests/licenseagent.yaml ``` ### Create alerts using Prometheus metrics[​](#create-alerts-using-prometheus-metrics) @@ -45682,7 +45687,7 @@ EOF When the main install guide tells you to apply the `custom-resources.yaml`, typically by running `kubectl create` with the URL of the file directly, you should instead download the file, so that you can edit it: ```bash - curl -o custom-resources.yaml https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml + curl -o custom-resources.yaml https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml ``` Edit the file in your editor of choice and find the `Installation` resource, which should be at the top of the file. To enable eBPF mode, we need to add a new `calicoNetwork` section inside the `spec` of the Installation resource, including the `linuxDataplane` field. For EKS Bottlerocket OS only, you should also add the `flexVolumePath` setting as shown below. @@ -46400,9 +46405,9 @@ To use nftables, your Kubernetes installation must be configured to use kube-pro 1. Install the Tigera Operator and custom resource definitions. 
```bash - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/operator-crds.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/operator-crds.yaml - kubectl create -f https://downloads.tigera.io/ee/v3.22.3/manifests/tigera-operator.yaml + kubectl create -f https://downloads.tigera.io/ee/v3.22.4/manifests/tigera-operator.yaml ``` > **SECONDARY:** Due to the large size of the CRD bundle, `kubectl apply` might exceed request limits. Instead, use `kubectl create` or `kubectl replace`. @@ -46412,7 +46417,7 @@ To use nftables, your Kubernetes installation must be configured to use kube-pro 1. Download the default `custom-resources.yaml` file: ```bash - curl -O -L https://downloads.tigera.io/ee/v3.22.3/manifests/custom-resources.yaml + curl -O -L https://downloads.tigera.io/ee/v3.22.4/manifests/custom-resources.yaml ``` 2. Enable nftables mode by setting `spec.linuxDataplane` to `nftables` in the `Installation` resource: @@ -49601,7 +49606,7 @@ EGWDeploymentContainer is a Egress Gateway Deployment container. | Field | Description | | --------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `name` *string* | Name is an enum which identifies the EGW Deployment container by name. Supported values are: calico-egw | +| `name` *string* | Name is an enum which identifies the EGW Deployment container by name. 
Supported values are: egress-gateway | | `resources` *[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)* | (Optional) Resources allows customization of limits and requests for compute resources such as cpu and memory. If specified, this overrides the named EGW Deployment container's resources. If omitted, the EGW Deployment will use its default value for this container's resources. If used in conjunction with the deprecated ComponentResources, then this value takes precedence. | ### EGWDeploymentInitContainer[​](#egwdeploymentinitcontainer) @@ -50951,7 +50956,7 @@ InstallationSpec defines configuration for a Calico or Calico Enterprise install | `componentResources` *[ComponentResource](#componentresource) array* | (Optional) Deprecated. Please use CalicoNodeDaemonSet, TyphaDeployment, and KubeControllersDeployment. ComponentResources can be used to customize the resource requirements for each component. Node, Typha, and KubeControllers are supported for installations. | | `certificateManagement` *[CertificateManagement](#certificatemanagement)* | (Optional) CertificateManagement configures pods to submit a CertificateSigningRequest to the certificates.k8s.io/v1 API in order to obtain TLS certificates. This feature requires that you bring your own CSR signing and approval process, otherwise pods will be stuck during initialization. | | `tlsCipherSuites` *[TLSCipherSuites](#tlsciphersuites)* | (Optional) TLSCipherSuites defines the cipher suite list that the TLS protocol should use during secure communication. | -| `nonPrivileged` *[NonPrivilegedType](#nonprivilegedtype)* | (Optional) NonPrivileged configures Calico to be run in non-privileged containers as non-root users where possible. | +| `nonPrivileged` *[NonPrivilegedType](#nonprivilegedtype)* | (Optional) Deprecated. NonPrivileged is deprecated and will be removed from the API in a future release. 
Enabling this field is not supported and will cause errors. NonPrivileged configures Calico to be run in non-privileged containers as non-root users where possible. | | `calicoNodeDaemonSet` *[CalicoNodeDaemonSet](#caliconodedaemonset)* | (Optional) CalicoNodeDaemonSet configures the calico-node DaemonSet. If used in conjunction with the deprecated ComponentResources, then these overrides take precedence. | | `csiNodeDriverDaemonSet` *[CSINodeDriverDaemonSet](#csinodedriverdaemonset)* | (Optional) CSINodeDriverDaemonSet configures the csi-node-driver DaemonSet. | | `calicoKubeControllersDeployment` *[CalicoKubeControllersDeployment](#calicokubecontrollersdeployment)* | (Optional) CalicoKubeControllersDeployment configures the calico-kube-controllers Deployment. If used in conjunction with the deprecated ComponentResources, then these overrides take precedence. | @@ -57337,7 +57342,7 @@ Increasing conntrack limit Running the following command: -docker run --net=host --privileged --name=calico-node -d --restart=always -e ETCD_SCHEME=http -e HOSTNAME=calico -e ETCD_AUTHORITY=127.0.0.1:2379 -e AS= -e NO_DEFAULT_POOLS= -e ETCD_ENDPOINTS= -e IP= -e IP6= -e CALICO_NETWORKING_BACKEND=bird -v /var/run/docker.sock:/var/run/docker.sock -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /var/log/calico:/var/log/calico -v /run/docker/plugins:/run/docker/plugins quay.io/tigera/node:v3.22.3 +docker run --net=host --privileged --name=calico-node -d --restart=always -e ETCD_SCHEME=http -e HOSTNAME=calico -e ETCD_AUTHORITY=127.0.0.1:2379 -e AS= -e NO_DEFAULT_POOLS= -e ETCD_ENDPOINTS= -e IP= -e IP6= -e CALICO_NETWORKING_BACKEND=bird -v /var/run/docker.sock:/var/run/docker.sock -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /var/log/calico:/var/log/calico -v /run/docker/plugins:/run/docker/plugins quay.io/tigera/node:v3.22.4 Waiting for etcd connection... 
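The install and troubleshooting commands in this section pin the release version (`v3.22.4`) inline in each URL and image tag. When scripting an install, it can help to derive the manifest URLs from a single pinned version — a small shell sketch, assuming the `downloads.tigera.io` URL layout shown in the commands above (the variable names are illustrative, not from the docs):

```shell
# Sketch only: pin the release once so the CRD and operator manifest
# URLs used above cannot drift out of sync with each other.
EE_VERSION="v3.22.4"
BASE_URL="https://downloads.tigera.io/ee/${EE_VERSION}/manifests"

# Per the note above, use `kubectl create` (not `apply`) for the large CRD bundle:
#   kubectl create -f "${BASE_URL}/operator-crds.yaml"
#   kubectl create -f "${BASE_URL}/tigera-operator.yaml"
echo "${BASE_URL}/operator-crds.yaml"
```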
@@ -61204,2743 +61209,4491 @@ spec: | Attribute | Value | | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Key | `prometheusWireGuardMetricsEnabled` | -| Description | Disables WireGuard metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | +| Description | Disables wireguard metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | | Schema | Boolean. | | Default | `true` | #### Data plane: Common[​](#data-plane-common) -##### `allowIPIPPacketsFromWorkloads` +No matching group found for 'Data plane: Common'. -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------- | -| Key | `allowIPIPPacketsFromWorkloads` | -| Description | Controls whether Felix will add a rule to drop IPIP encapsulated traffic from workloads. | -| Schema | Boolean. | -| Default | `false` | - -##### `allowVXLANPacketsFromWorkloads` - -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------- | -| Key | `allowVXLANPacketsFromWorkloads` | -| Description | Controls whether Felix will add a rule to drop VXLAN encapsulated traffic from workloads. | -| Schema | Boolean. | -| Default | `false` | +#### Data plane: iptables[​](#data-plane-iptables) -##### `cgroupV2Path` +No matching group found for 'Data plane: iptables'. -| Attribute | Value | -| ----------- | ------------------------------------------------------------------ | -| Key | `cgroupV2Path` | -| Description | Overrides the default location where to find the cgroup hierarchy. | -| Schema | String. 
| -| Default | none | +#### Data plane: nftables[​](#data-plane-nftables) -##### `chainInsertMode` +No matching group found for 'Data plane: nftables'. -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `chainInsertMode` | -| Description | Controls whether Felix hooks the kernel's top-level iptables chains by inserting a rule at the top of the chain or by appending a rule at the bottom. insert is the safe default since it prevents Calico's rules from being bypassed. If you switch to append mode, be sure that the other rules in the chains signal acceptance by falling through to the Calico rules, otherwise the Calico policy will be bypassed. | -| Schema | One of: `Append`, `Insert`. | -| Default | `Insert` | +#### Data plane: eBPF[​](#data-plane-ebpf) -##### `dataplaneDriver` +No matching group found for 'Data plane: eBPF'. -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------- | -| Key | `dataplaneDriver` | -| Description | Filename of the external dataplane driver to use. Only used if UseInternalDataplaneDriver is set to false. | -| Schema | String. | -| Default | `calico-iptables-plugin` | +#### Data plane: Windows[​](#data-plane-windows) -##### `dataplaneWatchdogTimeout` +No matching group found for 'Data plane: Windows'. 
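Settings such as `chainInsertMode` above are fields on the cluster-wide `FelixConfiguration` resource. A minimal sketch, assuming the usual singleton resource named `default` (the value shown is the documented default; see the caveat about `Append` mode in the table above):

```yaml
# Sketch only: a FelixConfiguration overriding the iptables hook mode
# described above. "default" is the usual cluster-wide instance.
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  chainInsertMode: Insert
```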
-| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `dataplaneWatchdogTimeout` | -| Description | The readiness/liveness timeout used for Felix's (internal) dataplane driver. Deprecated: replaced by the generic HealthTimeoutOverrides. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `1m30s` | +#### Data plane: OpenStack support[​](#data-plane-openstack-support) -##### `defaultEndpointToHostAction` +No matching group found for 'Data plane: OpenStack support'. -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `defaultEndpointToHostAction` | -| Description | Controls what happens to traffic that goes from a workload endpoint to the host itself (after the endpoint's egress policy is applied). By default, Calico blocks traffic from workload endpoints to the host itself with an iptables "DROP" action. If you want to allow some or all traffic from endpoint to host, set this parameter to RETURN or ACCEPT. 
Use RETURN if you have your own rules in the iptables "INPUT" chain; Calico will insert its rules at the top of that chain, then "RETURN" packets to the "INPUT" chain once it has completed processing workload endpoint egress policy. Use ACCEPT to unconditionally accept packets from workloads after processing workload endpoint egress policy. | -| Schema | One of: `Accept`, `Drop`, `Return`. | -| Default | `Drop` | +#### Data plane: XDP acceleration for iptables data plane[​](#data-plane-xdp-acceleration-for-iptables-data-plane) -##### `deviceRouteProtocol` +No matching group found for 'Data plane: XDP acceleration for iptables data plane'. -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------- | -| Key | `deviceRouteProtocol` | -| Description | Controls the protocol to set on routes programmed by Felix. The protocol is an 8-bit label used to identify the owner of the route. | -| Schema | Integer | -| Default | `3` | +#### Overlay: VXLAN overlay[​](#overlay-vxlan-overlay) -##### `deviceRouteSourceAddress` +##### `vxlanEnabled` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `deviceRouteSourceAddress` | -| Description | IPv4 address to set as the source hint for routes programmed by Felix. When not set the source address for local traffic from host to workload will be determined by the kernel. | -| Schema | String. | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `vxlanEnabled` | +| Description | Overrides whether Felix should create the VXLAN tunnel device for IPv4 VXLAN networking. 
Optional as Felix determines this based on the existing IP pools. | +| Schema | Boolean. | +| Default | none | -##### `deviceRouteSourceAddressIPv6` +##### `vxlanMTU` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `deviceRouteSourceAddressIPv6` | -| Description | IPv6 address to set as the source hint for routes programmed by Felix. When not set the source address for local traffic from host to workload will be determined by the kernel. | -| Schema | String. | -| Default | none | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------- | +| Key | `vxlanMTU` | +| Description | The MTU to set on the IPv4 VXLAN tunnel device. Optional as Felix auto-detects the MTU based on the MTU of the host's interfaces. | +| Schema | Integer | +| Default | `0` | -##### `disableConntrackInvalidCheck` +##### `vxlanMTUV6` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `disableConntrackInvalidCheck` | -| Description | Disables the check for invalid connections in conntrack. While the conntrack invalid check helps to detect malicious traffic, it can also cause issues with certain multi-NIC scenarios. | -| Schema | Boolean. | -| Default | `false` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------- | +| Key | `vxlanMTUV6` | +| Description | The MTU to set on the IPv6 VXLAN tunnel device. Optional as Felix auto-detects the MTU based on the MTU of the host's interfaces. 
| +| Schema | Integer | +| Default | `0` | -##### `dropActionOverride` +##### `vxlanPort` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `dropActionOverride` | -| Description | Overrides the Drop action in Felix, optionally changing the behavior to Accept, and optionally adding Log. Possible values are Drop, LogAndDrop, Accept, LogAndAccept. | -| Schema | One of: `Accept`, `Drop`, `LogAndAccept`, `LogAndDrop`. | -| Default | `Drop` | +| Attribute | Value | +| ----------- | --------------------------------------------- | +| Key | `vxlanPort` | +| Description | The UDP port number to use for VXLAN traffic. | +| Schema | Integer | +| Default | `4789` | -##### `endpointStatusPathPrefix` +##### `vxlanVNI` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `endpointStatusPathPrefix` | -| Description | The path to the directory where endpoint status will be written. Endpoint status file reporting is disabled if field is left empty.Chosen directory should match the directory used by the CNI plugin for PodStartupDelay. | -| Schema | String. | -| Default | `/var/run/calico` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------ | +| Key | `vxlanVNI` | +| Description | The VXLAN VNI to use for VXLAN traffic. You may need to change this if the default value is in use on your system. 
| +| Schema | Integer | +| Default | `4096` | -##### `externalNodesList` +#### Overlay: IP-in-IP[​](#overlay-ip-in-ip) -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `externalNodesList` | -| Description | A list of CIDR's of external, non-Calico nodes from which VXLAN/IPIP overlay traffic will be allowed. By default, external tunneled traffic is blocked to reduce attack surface. | -| Schema | List of strings: `["", ...]`. | -| Default | none | +##### `ipipEnabled` -##### `failsafeInboundHostPorts` +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `ipipEnabled` | +| Description | Overrides whether Felix should configure an IPIP interface on the host. Optional as Felix determines this based on the existing IP pools. | +| Schema | Boolean. | +| Default | none | -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `failsafeInboundHostPorts` | -| Description | A list of ProtoPort struct objects including UDP/TCP/SCTP ports and CIDRs that Felix will allow incoming traffic to host endpoints on irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. 
For backwards compatibility, if the protocol is not specified, it defaults to "tcp". If a CIDR is not specified, it will allow traffic from all addresses. To disable all inbound host ports, use the value "\[]". The default value allows ssh access, DHCP, BGP, etcd and the Kubernetes API. | -| Schema | List of protocol/port objects with optional CIDR match: `[{protocol: "TCP\|UDP", port: , net: ""}, ...]`. | -| Default | `[{"protocol":"tcp","port":22},{"protocol":"udp","port":68},{"protocol":"tcp","port":179},{"protocol":"tcp","port":2379},{"protocol":"tcp","port":2380},{"protocol":"tcp","port":5473},{"protocol":"tcp","port":6443},{"protocol":"tcp","port":6666},{"protocol":"tcp","port":6667}]` | +##### `ipipMTU` -##### `failsafeOutboundHostPorts` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `ipipMTU` | +| Description | Controls the MTU to set on the IPIP tunnel device. Optional as Felix auto-detects the MTU based on the MTU of the host's interfaces. 
| +| Schema | Integer | +| Default | `0` | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `failsafeOutboundHostPorts` | -| Description | A list of PortProto struct objects including UDP/TCP/SCTP ports and CIDRs that Felix will allow outgoing traffic from host endpoints to irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. For backwards compatibility, if the protocol is not specified, it defaults to "tcp". If a CIDR is not specified, it will allow traffic from all addresses. To disable all outbound host ports, use the value "\[]". The default value opens etcd's standard ports to ensure that Felix does not get cut off from etcd as well as allowing DHCP, DNS, BGP and the Kubernetes API. | -| Schema | List of protocol/port objects with optional CIDR match: `[{protocol: "TCP\|UDP", port: , net: ""}, ...]`. | -| Default | `[{"protocol":"udp","port":53},{"protocol":"udp","port":67},{"protocol":"tcp","port":179},{"protocol":"tcp","port":2379},{"protocol":"tcp","port":2380},{"protocol":"tcp","port":5473},{"protocol":"tcp","port":6443},{"protocol":"tcp","port":6666},{"protocol":"tcp","port":6667}]` | +#### Overlay: WireGuard[​](#overlay-wireguard) -##### `floatingIPs` +No matching group found for 'Overlay: WireGuard'. 
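The VXLAN and IP-in-IP fields documented above live on the same `FelixConfiguration` resource. A hedged sketch showing several of them together — the values are the documented defaults, shown for illustration only; Felix normally derives the tunnel settings from the configured IP pools and auto-detects MTUs, so set these fields only to override that behavior:

```yaml
# Sketch only: overlay-related FelixConfiguration fields from the
# VXLAN and IP-in-IP tables above. Values are the documented defaults.
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  vxlanPort: 4789
  vxlanVNI: 4096
  vxlanMTU: 0   # 0 = auto-detect from host interface MTU
  ipipMTU: 0    # 0 = auto-detect from host interface MTU
```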
-| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `floatingIPs` | -| Description | Configures whether or not Felix will program non-OpenStack floating IP addresses. (OpenStack-derived floating IPs are always programmed, regardless of this setting.) | -| Schema | One of: `"Disabled"`, `"Enabled"`. | -| Default | `Disabled` | +#### Overlay: IPSec[​](#overlay-ipsec) -##### `ipForwarding` +##### `ipsecAllowUnsecuredTraffic` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `ipForwarding` | -| Description | Controls whether Felix sets the host sysctls to enable IP forwarding. IP forwarding is required when using Calico for workload networking. This should be disabled only on hosts where Calico is used solely for host protection. In BPF mode, due to a kernel interaction, either IPForwarding must be enabled or BPFEnforceRPF must be disabled. | -| Schema | One of: `"Disabled"`, `"Enabled"`. | -| Default | `Enabled` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `ipsecAllowUnsecuredTraffic` | +| Description | Controls whether non-IPsec traffic is allowed in addition to IPsec traffic. Enabling this negates the anti-spoofing protections of IPsec but it is useful when migrating to/from IPsec. | +| Schema | Boolean. 
| 
+| Default | `false` |

-##### `interfaceExclude`

+##### `ipsecESPAlgorithm`

-| Attribute | Value |
-| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key | `interfaceExclude` |
-| Description | A comma-separated list of interface names that should be excluded when Felix is resolving host endpoints. The default value ensures that Felix ignores Kubernetes' internal `kube-ipvs0` device. If you want to exclude multiple interface names using a single value, the list supports regular expressions. For regular expressions you must wrap the value with `/`. For example having values `/^kube/,veth1` will exclude all interfaces that begin with `kube` and also the interface `veth1`. |
-| Schema | String. |
-| Default | `kube-ipvs0` |

+| Attribute | Value |
+| ----------- | ---------------------------------------------------------------------------------- |
+| Key | `ipsecESPAlgorithm` |
+| Description | Sets the IPSec ESP algorithm. Default is NIST suite B recommendation. |
+| Schema | String. 
| +| Default | `aes128gcm16-ecp256` | -##### `interfacePrefix` +##### `ipsecIKEAlgorithm` -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `interfacePrefix` | -| Description | The interface name prefix that identifies workload endpoints and so distinguishes them from host endpoint interfaces. Note: in environments other than bare metal, the orchestrators configure this appropriately. For example our Kubernetes and Docker integrations set the 'cali' value, and our OpenStack integration sets the 'tap' value. | -| Schema | String. | -| Default | `cali` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------- | +| Key | `ipsecIKEAlgorithm` | +| Description | Sets IPSec IKE algorithm. Default is NIST suite B recommendation. | +| Schema | String. | +| Default | `aes128gcm16-prfsha256-ecp256` | -##### `interfaceRefreshInterval` +##### `ipsecLogLevel` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------- | -| Key | `interfaceRefreshInterval` | -| Description | The period at which Felix rescans local interfaces to verify their state. The rescan can be disabled by setting the interval to 0. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `1m30s` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `ipsecLogLevel` | +| Description | Controls log level for IPSec components. Set to None for no logging. 
A generic log level terminology is used \[None, Notice, Info, Debug, Verbose]. | +| Schema | One of: `Debug`, `Info`, `None`, `Notice`, `Verbose`. | +| Default | `Info` | -##### `ipv6Support` +##### `ipsecMode` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------- | -| Key | `ipv6Support` | -| Description | Controls whether Felix enables support for IPv6 (if supported by the in-use dataplane). | -| Schema | Boolean. | -| Default | `true` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------ | +| Key | `ipsecMode` | +| Description | Controls which mode IPSec is operating on. Default value means IPSec is not enabled. | +| Schema | String. | +| Default | none | -##### `istioAmbientMode` +##### `ipsecPolicyRefreshInterval` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------- | -| Key | `istioAmbientMode` | -| Description | Configures Felix to work together with Tigera's Istio distribution. | -| Schema | One of: `"Disabled"`, `"Enabled"`. | -| Default | `Disabled` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------- | +| Key | `ipsecPolicyRefreshInterval` | +| Description | The interval at which Felix will check the kernel's IPsec policy tables and repair any inconsistencies. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| 
+| Default | `10m0s` |

-##### `istioDSCPMark`

+#### Flow logs: Prometheus reports[​](#flow-logs-prometheus-reports)

-| Attribute | Value |
-| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key | `istioDSCPMark` |
-| Description | Sets the value to use when directing traffic to Istio ZTunnel, when Istio is enabled. The mark is set only on SYN packets at the final hop to avoid interference with other protocols. This value is reserved by Calico and must not be used with other Istio installation. |
-| Schema | String. |
-| Default | none |

+##### `deletedMetricsRetentionSecs`

-##### `kubeMasqueradeBit`

+| Attribute | Value |
+| ----------- | -------------------------------------------------------------- |
+| Key | `deletedMetricsRetentionSecs` |
+| Description | Controls how long metrics are retained after the flow is gone. |
+| Schema | Integer. |
+| Default | `30s` |

-| Attribute | Value |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key | `kubeMasqueradeBit` |
-| Description | Should be set to the same value as --iptables-masquerade-bit of kube-proxy when TPROXY is used. The default is the same as kube-proxy default thus only needs a change if kube-proxy is using a non-standard setting. Must be within the range of 0-31. 
| -| Schema | Integer | -| Default | `14` | +##### `prometheusReporterCAFile` -##### `mtuIfacePattern` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------- | +| Key | `prometheusReporterCAFile` | +| Description | The path to the TLS CA file for the Prometheus per-flow metrics reporter. | +| Schema | String. | +| Default | none | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `mtuIfacePattern` | -| Description | A regular expression that controls which interfaces Felix should scan in order to calculate the host's MTU. This should not match workload interfaces (usually named cali...). | -| Schema | String. | -| Default | `^((en\|wl\|ww\|sl\|ib)[Pcopsvx].*\|(eth\|wlan\|wwan).*)` | +##### `prometheusReporterCertFile` -##### `natOutgoingAddress` +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------- | +| Key | `prometheusReporterCertFile` | +| Description | The path to the TLS certificate file for the Prometheus per-flow metrics reporter. | +| Schema | String. | +| Default | none | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `natOutgoingAddress` | -| Description | Specifies an address to use when performing source NAT for traffic in a natOutgoing pool that is leaving the network. By default the address used is an address on the interface the traffic is leaving on (i.e. it uses the iptables MASQUERADE target). | -| Schema | String. 
|
-| Default | none |
+##### `prometheusReporterEnabled`
-##### `natOutgoingExclusions`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `prometheusReporterEnabled` |
+| Description | Controls whether the Prometheus per-flow metrics reporter is enabled. This is used to show real-time flow metrics in the UI. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `natOutgoingExclusions` |
-| Description | When a IP pool setting `natOutgoing` is true, packets sent from Calico networked containers in this IP pool to destinations will be masqueraded. Configure which type of destinations is excluded from being masqueraded. - IPPoolsOnly: destinations outside of this IP pool will be masqueraded. - IPPoolsAndHostIPs: destinations outside of this IP pool and all hosts will be masqueraded. |
-| Schema | One of: `"IPPoolsAndHostIPs"`, `"IPPoolsOnly"`. |
-| Default | `IPPoolsOnly` |
+##### `prometheusReporterKeyFile`
-##### `natPortRange`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `prometheusReporterKeyFile` |
+| Description | The path to the TLS private key file for the Prometheus per-flow metrics reporter. |
+| Schema | String. |
+| Default | none |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `natPortRange` |
-| Description | Specifies the range of ports that is used for port mapping when doing outgoing NAT. When unset the default behavior of the network stack is used. |
-| Schema | String. |
-| Default | `0` |
+##### `prometheusReporterPort`
-##### `nftablesDNSPolicyMode`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `prometheusReporterPort` |
+| Description | The port that the Prometheus per-flow metrics reporter should bind to. |
+| Schema | Integer: \[0,65535] |
+| Default | `9092` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `nftablesDNSPolicyMode` |
-| Description | Specifies how DNS policy programming will be handled for NFTables. DelayDeniedPacket - Felix delays any denied packet that traversed a policy that included egress domain matches, but did not match. The packet is released after a fixed time, or after the destination IP address was programmed. DelayDNSResponse - Felix delays any DNS response until related IPSets are programmed. This introduces some latency to all DNS packets (even when no IPSet programming is required), but it ensures policy hit statistics are accurate. This is the recommended setting when you are making use of staged policies or policy rule hit statistics. NoDelay - Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. |
-| Schema | One of: `"DelayDNSResponse"`, `"DelayDeniedPacket"`, `"NoDelay"`. |
-| Default | `DelayDeniedPacket` |
+#### Flow logs: Syslog reports[​](#flow-logs-syslog-reports)
-##### `nftablesMode`
+##### `syslogReporterAddress`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `nftablesMode` |
-| Description | Configures nftables support in Felix. |
-| Schema | One of: `"Auto"`, `"Disabled"`, `"Enabled"`. |
-| Default | `Disabled` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `syslogReporterAddress` |
+| Description | The address to dial to when writing to Syslog. For TCP and UDP networks, the address has the form "host:port". The host must be a literal IP address, or a host name that can be resolved to IP addresses. The port must be a literal port number or a service name. For more, see: https\://pkg.go.dev/net#Dial. |
+| Schema | String. |
+| Default | none |
-##### `netlinkTimeout`
+##### `syslogReporterEnabled`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `netlinkTimeout` |
-| Description | The timeout when talking to the kernel over the netlink protocol, used for programming routes, rules, and other kernel objects. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `10s` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `syslogReporterEnabled` |
+| Description | Turns on the feature to write logs to Syslog. Please note that this can incur significant disk space usage when running Felix on non-cluster hosts. |
+| Schema | Boolean. |
+| Default | `false` |
-##### `nfNetlinkBufSize`
+##### `syslogReporterNetwork`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `nfNetlinkBufSize` |
-| Description | Controls the size of NFLOG messages that the kernel will try to send to Felix. NFLOG messages are used to report flow verdicts from the kernel. Warning: currently increasing the value may cause errors due to a bug in the netlink library. |
-| Schema | String. |
-| Default | `65536` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `syslogReporterNetwork` |
+| Description | The network to dial to when writing to Syslog. Known networks are "tcp", "tcp4" (IPv4-only), "tcp6" (IPv6-only), "udp", "udp4" (IPv4-only), "udp6" (IPv6-only), "ip", "ip4" (IPv4-only), "ip6" (IPv6-only), "unix", "unixgram" and "unixpacket". For more, see: https\://pkg.go.dev/net#Dial. |
+| Schema | String. |
+| Default | none |
-##### `policySyncPathPrefix`
+#### Flow logs: file reports[​](#flow-logs-file-reports)
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `policySyncPathPrefix` |
-| Description | Used to by Felix to communicate policy changes to external services, like Application layer policy. |
-| Schema | String. |
-| Default | none |
+##### `flowLogsAggregationThresholdBytes`
-##### `programClusterRoutes`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsAggregationThresholdBytes` |
+| Description | Used to specify how far behind the external pipeline that reads flow logs can be. Default is 8192 bytes. This parameter only takes effect when FlowLogsDynamicAggregationEnabled is set to true. |
+| Schema | Integer |
+| Default | `8192` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `programClusterRoutes` |
-| Description | Specifies whether Felix should program IPIP routes instead of BIRD. Felix always programs VXLAN routes. |
-| Schema | One of: `"Disabled"`, `"Enabled"`. |
-| Default | `Disabled` |
+##### `flowLogsCollectProcessInfo`
-##### `removeExternalRoutes`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsCollectProcessInfo` |
+| Description | If enabled, Felix will load the kprobe BPF programs to collect process info. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `removeExternalRoutes` |
-| Description | Controls whether Felix will remove unexpected routes to workload interfaces. Felix will always clean up expected routes that use the configured DeviceRouteProtocol. To add your own routes, you must use a distinct protocol (in addition to setting this field to false). |
-| Schema | Boolean. |
-| Default | `true` |
+##### `flowLogsCollectProcessPath`
-##### `requireMTUFile`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsCollectProcessPath` |
+| Description | When FlowLogsCollectProcessPath and FlowLogsCollectProcessInfo are both enabled, each flow log will include information about the process that is sending or receiving the packets in that flow: the `process_name` field will contain the full path of the process executable, and the `process_args` field will have the arguments with which the executable was invoked. Process information will not be reported for connections which use raw sockets. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `requireMTUFile` |
-| Description | Specifies whether mtu file is required to start the felix. Optional as to keep the same as previous behavior. |
-| Schema | Boolean. |
-| Default | `false` |
+##### `flowLogsCollectTcpStats`
-##### `routeRefreshInterval`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsCollectTcpStats` |
+| Description | Enables flow logs reporting TCP socket stats. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `routeRefreshInterval` |
-| Description | The period at which Felix re-checks the routes in the dataplane to ensure that no other process has accidentally broken Calico's rules. Set to 0 to disable route refresh. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `1m30s` |
+##### `flowLogsCollectorDebugTrace`
-##### `routeSource`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsCollectorDebugTrace` |
+| Description | When FlowLogsCollectorDebugTrace is set to true, enables the logs in the collector to be printed in their entirety. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `routeSource` |
-| Description | Configures where Felix gets its routing information. - WorkloadIPs: use workload endpoints to construct routes. - CalicoIPAM: the default - use IPAM data to construct routes. |
-| Schema | One of: `CalicoIPAM`, `WorkloadIPs`. |
-| Default | `CalicoIPAM` |
+##### `flowLogsDestDomainsByClient`
-##### `routeSyncDisabled`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsDestDomainsByClient` |
+| Description | Used to configure if the source IP is used in the mapping of top level destination domains. |
+| Schema | Boolean. |
+| Default | `true` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `routeSyncDisabled` |
-| Description | Will disable all operations performed on the route table. Set to true to run in network-policy mode only. |
-| Schema | Boolean. |
-| Default | `false` |
+##### `flowLogsDynamicAggregationEnabled`
-##### `routeTableRange`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsDynamicAggregationEnabled` |
+| Description | Used to enable/disable dynamically changing aggregation levels. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `routeTableRange` |
-| Description | Deprecated in favor of RouteTableRanges. Calico programs additional Linux route tables for various purposes. RouteTableRange specifies the indices of the route tables that Calico should use. |
-| Schema | Route table range: `{min:, max}`. |
-| Default | none |
+##### `flowLogsEnableHostEndpoint`
-##### `routeTableRanges`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsEnableHostEndpoint` |
+| Description | Enables Flow logs reporting for HostEndpoints. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `routeTableRanges` |
-| Description | Calico programs additional Linux route tables for various purposes. RouteTableRanges specifies a set of table index ranges that Calico should use. Deprecates `RouteTableRange`, overrides `RouteTableRange`. |
-| Schema | List of route table ranges: `[{min:, max}, ...]`. |
-| Default | none |
+##### `flowLogsEnableNetworkSets`
-##### `serviceLoopPrevention`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsEnableNetworkSets` |
+| Description | Enables Flow logs reporting for GlobalNetworkSets. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `serviceLoopPrevention` |
-| Description | When service IP advertisement is enabled, prevent routing loops to service IPs that are not in use, by dropping or rejecting packets that do not get DNAT'd by kube-proxy. Unless set to "Disabled", in which case such routing loops continue to be allowed. |
-| Schema | One of: `Disabled`, `Drop`, `Reject`. |
-| Default | `Drop` |
+##### `flowLogsFileAggregationKindForAllowed`
-##### `sidecarAccelerationEnabled`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileAggregationKindForAllowed` |
+| Description | Used to choose the type of aggregation for flow log entries created for allowed connections. Accepted values are 0, 1 and 2. 0 - No aggregation. 1 - Source port based aggregation. 2 - Pod prefix name based aggregation. |
+| Schema | One of: `0`, `1`, `2`. |
+| Default | `2` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `sidecarAccelerationEnabled` |
-| Description | Enables experimental sidecar acceleration. |
-| Schema | Boolean. |
-| Default | `false` |
+##### `flowLogsFileAggregationKindForDenied`
-##### `useInternalDataplaneDriver`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileAggregationKindForDenied` |
+| Description | Used to choose the type of aggregation for flow log entries created for denied connections. Accepted values are 0, 1, 2 and 3. 0 - No aggregation. 1 - Source port based aggregation. 2 - Pod prefix name based aggregation. 3 - No destination ports based aggregation. |
+| Schema | One of: `0`, `1`, `2`, `3`. |
+| Default | `1` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `useInternalDataplaneDriver` |
-| Description | If true, Felix will use its internal dataplane programming logic. If false, it will launch an external dataplane driver and communicate with it over protobuf. |
-| Schema | Boolean. |
-| Default | `true` |
+##### `flowLogsFileDirectory`
-##### `wafEventLogsFileDirectory`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileDirectory` |
+| Description | Sets the directory where flow logs files are stored. |
+| Schema | String. |
+| Default | `/var/log/calico/flowlogs` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `wafEventLogsFileDirectory` |
-| Description | Sets the directory where WAFEvent log files are stored. |
-| Schema | String. |
-| Default | `/var/log/calico/waf` |
+##### `flowLogsFileDomainsLimit`
-##### `wafEventLogsFileEnabled`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileDomainsLimit` |
+| Description | Used to configure the number of (destination) domains to include in the flow log. These are not included for workload or host endpoint destinations. |
+| Schema | Integer |
+| Default | `5` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `wafEventLogsFileEnabled` |
-| Description | Controls logging WAFEvent logs to a file. If false no WAFEvent logging to file will occur. |
-| Schema | Boolean. |
-| Default | `false` |
+##### `flowLogsFileEnabled`
-##### `wafEventLogsFileMaxFileSizeMB`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileEnabled` |
+| Description | When set to true, enables logging flow logs to a file. If false no flow logging to file will occur. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `wafEventLogsFileMaxFileSizeMB` |
-| Description | Sets the max size in MB of WAFEvent log files before rotation. |
-| Schema | Integer |
-| Default | `100` |
+##### `flowLogsFileEnabledForAllowed`
-##### `wafEventLogsFileMaxFiles`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileEnabledForAllowed` |
+| Description | Used to enable/disable flow log entries created for allowed connections. Default is true. This parameter only takes effect when FlowLogsFileReporterEnabled is set to true. |
+| Schema | Boolean. |
+| Default | `true` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `wafEventLogsFileMaxFiles` |
-| Description | Sets the number of WAFEvent log files to keep. |
-| Schema | Integer |
-| Default | `5` |
+##### `flowLogsFileEnabledForDenied`
-##### `wafEventLogsFlushInterval`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileEnabledForDenied` |
+| Description | Used to enable/disable flow log entries created for denied flows. Default is true. This parameter only takes effect when FlowLogsFileReporterEnabled is set to true. |
+| Schema | Boolean. |
+| Default | `true` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `wafEventLogsFlushInterval` |
-| Description | Configures the interval at which Felix exports WAFEvent logs. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `15s` |
+##### `flowLogsFileIncludeLabels`
-##### `workloadSourceSpoofing`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileIncludeLabels` |
+| Description | Used to configure if endpoint labels are included in a Flow log entry written to file. |
+| Schema | Boolean. |
+| Default | `false` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `workloadSourceSpoofing` |
-| Description | Controls whether pods can use the allowedSourcePrefixes annotation to send traffic with a source IP address that is not theirs. This is disabled by default. When set to "Any", pods can request any prefix. |
-| Schema | One of: `Any`, `Disabled`. |
-| Default | `Disabled` |
+##### `flowLogsFileIncludePolicies`
-#### Data plane: iptables[​](#data-plane-iptables)
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileIncludePolicies` |
+| Description | Used to configure if policy information is included in a Flow log entry written to file. |
+| Schema | Boolean. |
+| Default | `false` |
+##### `flowLogsFileIncludeService`
-##### `ipsetsRefreshInterval`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `ipsetsRefreshInterval` |
-| Description | Controls the period at which Felix re-checks all IP sets to look for discrepancies. Set to 0 to disable the periodic refresh. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `1m30s` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileIncludeService` |
+| Description | Used to configure if the destination service is included in a Flow log entry written to file. The service information can only be included if the flow was explicitly determined to be directed at the service (e.g. when the pre-DNAT destination corresponds to the service ClusterIP and port). |
+| Schema | Boolean. |
+| Default | `false` |
+##### `flowLogsFileMaxFileSizeMB`
-##### `iptablesBackend`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesBackend` |
-| Description | Controls which backend of iptables will be used. The default is `Auto`.Warning: changing this on a running system can leave "orphaned" rules in the "other" backend. These should be cleaned up to avoid confusing interactions. |
-| Schema | One of: `Auto`, `Legacy`, `NFT`. |
-| Default | `Auto` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileMaxFileSizeMB` |
+| Description | Sets the max size in MB of flow logs files before rotation. |
+| Schema | Integer |
+| Default | `100` |
+##### `flowLogsFileMaxFiles`
-##### `iptablesFilterAllowAction`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesFilterAllowAction` |
-| Description | Controls what happens to traffic that is accepted by a Felix policy chain in the iptables filter table (which is used for "normal" policy). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. |
-| Schema | One of: `Accept`, `Return`. |
-| Default | `Accept` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileMaxFiles` |
+| Description | Sets the number of log files to keep. |
+| Schema | Integer |
+| Default | `5` |
+##### `flowLogsFileNatOutgoingPortLimit`
-##### `iptablesFilterDenyAction`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesFilterDenyAction` |
-| Description | Controls what happens to traffic that is denied by network policy. By default Calico blocks traffic with an iptables "DROP" action. If you want to use "REJECT" action instead you can configure it in here. |
-| Schema | One of: `Drop`, `Reject`. |
-| Default | `Drop` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFileNatOutgoingPortLimit` |
+| Description | Used to specify the maximum number of distinct post-SNAT ports that will appear in the flow logs. Default value is 3. |
+| Schema | Integer |
+| Default | `3` |
+##### `flowLogsFilePerFlowProcessArgsLimit`
-##### `iptablesLockFilePath`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesLockFilePath` |
-| Description | The location of the iptables lock file. You may need to change this if the lock file is not in its standard location (for example if you have mapped it into Felix's container at a different path). |
-| Schema | String. |
-| Default | `/run/xtables.lock` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFilePerFlowProcessArgsLimit` |
+| Description | Used to specify the maximum number of distinct process args that will appear in the flow logs. Default value is 5. |
+| Schema | Integer |
+| Default | `5` |
+##### `flowLogsFilePerFlowProcessLimit`
-##### `iptablesLockProbeInterval`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesLockProbeInterval` |
-| Description | When IptablesLockTimeout is enabled: the time that Felix will wait between attempts to acquire the iptables lock if it is not available. Lower values make Felix more responsive when the lock is contended, but use more CPU. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `50ms` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFilePerFlowProcessLimit` |
+| Description | Used to specify the maximum number of flow log entries with distinct process information beyond which process information will be aggregated. |
+| Schema | Integer |
+| Default | `2` |
+##### `flowLogsFlushInterval`
-##### `iptablesLockTimeout`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesLockTimeout` |
-| Description | The time that Felix itself will wait for the iptables lock (rather than delegating the lock handling to the `iptables` command).Deprecated: `iptables-restore` v1.8+ always takes the lock, so enabling this feature results in deadlock. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `0s` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsFlushInterval` |
+| Description | Configures the interval at which Felix exports flow logs. |
+| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
+| Default | `5m0s` |
+##### `flowLogsGoldmaneServer`
-##### `iptablesMangleAllowAction`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesMangleAllowAction` |
-| Description | Controls what happens to traffic that is accepted by a Felix policy chain in the iptables mangle table (which is used for "pre-DNAT" policy). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. |
-| Schema | One of: `Accept`, `Return`. |
-| Default | `Accept` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsGoldmaneServer` |
+| Description | The flow server endpoint to which flow data should be published. |
+| Schema | String. |
+| Default | none |
+##### `flowLogsLocalReporter`
-##### `iptablesMarkMask`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesMarkMask` |
-| Description | The mask that Felix selects its IPTables Mark bits from. Should be a 32 bit hexadecimal number with at least 8 bits set, none of which clash with any other mark bits in use on the system. |
-| Schema | Unsigned 32-bit integer. |
-| Default | `0xffff0000` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsLocalReporter` |
+| Description | Configures a local Unix socket for reporting flow data from each node. |
+| Schema | One of: `"Disabled"`, `"Enabled"`. |
+| Default | `Disabled` |
+##### `flowLogsMaxOriginalIPsIncluded`
-##### `iptablesNATOutgoingInterfaceFilter`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesNATOutgoingInterfaceFilter` |
-| Description | This parameter can be used to limit the host interfaces on which Calico will apply SNAT to traffic leaving a Calico IPAM pool with "NAT outgoing" enabled. This can be useful if you have a main data interface, where traffic should be SNATted and a secondary device (such as the docker bridge) which is local to the host and doesn't require SNAT. This parameter uses the iptables interface matching syntax, which allows + as a wildcard. Most users will not need to set this. Example: if your data interfaces are eth0 and eth1 and you want to exclude the docker bridge, you could set this to eth+. |
-| Schema | String. |
-| Default | none |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsMaxOriginalIPsIncluded` |
+| Description | Specifies the number of unique IP addresses (if relevant) that should be included in Flow logs. |
+| Schema | Integer |
+| Default | `50` |
+##### `flowLogsPolicyEvaluationMode`
-##### `iptablesPostWriteCheckInterval`
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesPostWriteCheckInterval` |
-| Description | The period after Felix has done a write to the dataplane that it schedules an extra read back in order to check the write was not clobbered by another process. This should only occur if another application on the system doesn't respect the iptables lock. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `5s` |
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsPolicyEvaluationMode` |
+| Description | Defines how policies are evaluated and reflected in flow logs. OnNewConnection - In this mode, staged policies are only evaluated when new connections are made in the dataplane. Staged/active policy changes will not be reflected in the `pending_policies` field of flow logs for long lived connections. Continuous - Felix evaluates active flows on a regular basis to determine the rule traces in the flow logs. Any policy updates that impact a flow will be reflected in the `pending_policies` field, offering a near-real-time view of policy changes across flows. |
+| Schema | String. |
+| Default | `Continuous` |
-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `iptablesRefreshInterval` |
-| Description | The period at which Felix re-checks the IP sets in the dataplane to ensure that no other process has accidentally broken Calico's rules. Set to 0 to disable IP sets refresh. Note: the default for this value is lower than the other refresh intervals as a workaround for a Linux kernel bug that was fixed in kernel version 4.11. If you are using v4.11 or greater you may want to set this to, a higher value to reduce Felix CPU usage. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `3m0s` |
+##### `flowLogsPolicyScope`
-##### `kubeNodePortRanges`
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `flowLogsPolicyScope` |
+| Description | Controls which policies are included in flow logs.
AllPolicies - Processes both transit policies for the local node and endpoint policies derived from packet source/destination IPs. Provides comprehensive visibility into all policy evaluations but increases log volume. EndpointPolicies - Processes only policies for endpoints identified as the source or destination of the packet (whether workload or host endpoints). | +| Schema | String. | +| Default | `EndpointPolicies` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `kubeNodePortRanges` | -| Description | Holds list of port ranges used for service node ports. Only used if felix detects kube-proxy running in ipvs mode. Felix uses these ranges to separate host and workload traffic. . | -| Schema | List of ports: `[, ...]` where `` is a port number (integer) or range (string), for example `80`, `8080:8089`. | -| Default | `["30000:32767"]` | +##### `flowLogsPositionFilePath` -##### `maxIpsetSize` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `flowLogsPositionFilePath` | +| Description | Used to specify the position of the external pipeline that reads flow logs. Default is /var/log/calico/flows.log.pos. This parameter only takes effect when FlowLogsDynamicAggregationEnabled is set to true. | +| Schema | String. | +| Default | `/var/log/calico/flows.log.pos` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------- | -| Key | `maxIpsetSize` | -| Description | The maximum number of IP addresses that can be stored in an IP set. Not applicable if using the nftables backend.
| -| Schema | Integer | -| Default | `1048576` | +#### DNS logs / policy[​](#dns-logs--policy) -#### Data plane: nftables[​](#data-plane-nftables) +##### `dnsCacheEpoch` -##### `nftablesFilterAllowAction` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------- | +| Key | `dnsCacheEpoch` | +| Description | An arbitrary number that can be changed, at runtime, to tell Felix to discard all its learnt DNS information. | +| Schema | Integer | +| Default | `0` | -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `nftablesFilterAllowAction` | -| Description | Controls the nftables action that Felix uses to represent the "allow" policy verdict in the filter table. The default is to `ACCEPT` the traffic, which is a terminal action. Alternatively, `RETURN` can be used to return the traffic back to the top-level chain for further processing by your rules. | -| Schema | One of: `Accept`, `Return`. | -| Default | `Accept` | +##### `dnsCacheFile` -##### `nftablesFilterDenyAction` +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------- | +| Key | `dnsCacheFile` | +| Description | The name of the file that Felix uses to preserve learnt DNS information when restarting. | +| Schema | String.
| +| Default | `/var/run/calico/felix-dns-cache.txt` | -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `nftablesFilterDenyAction` | -| Description | Controls what happens to traffic that is denied by network policy. By default, Calico blocks traffic with a "drop" action. If you want to use a "reject" action instead you can configure it here. | -| Schema | One of: `Drop`, `Reject`. | -| Default | `Drop` | +##### `dnsCacheSaveInterval` -##### `nftablesMangleAllowAction` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------ | +| Key | `dnsCacheSaveInterval` | +| Description | The periodic interval at which Felix saves learnt DNS information to the cache file. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | +| Default | `1m0s` | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `nftablesMangleAllowAction` | -| Description | Controls the nftables action that Felix uses to represent the "allow" policy verdict in the mangle table. The default is to `ACCEPT` the traffic, which is a terminal action. Alternatively, `RETURN` can be used to return the traffic back to the top-level chain for further processing by your rules. | -| Schema | One of: `Accept`, `Return`.
| -| Default | `Accept` | +##### `dnsExtraTTL` -##### `nftablesMarkMask` +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------- | +| Key | `dnsExtraTTL` | +| Description | Extra time to keep IPs and alias names that are learnt from DNS, in addition to each name or IP's advertised TTL. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | +| Default | `0s` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `nftablesMarkMask` | -| Description | The mask that Felix selects its nftables Mark bits from. Should be a 32 bit hexadecimal number with at least 8 bits set, none of which clash with any other mark bits in use on the system. | -| Schema | Unsigned 32-bit integer. | -| Default | `0xffff0000` | +##### `dnsLogsFileAggregationKind` -##### `nftablesRefreshInterval` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `dnsLogsFileAggregationKind` | +| Description | Used to choose the type of aggregation for DNS log entries. Accepted values are 0 and 1. 0 - No aggregation. 1 - Aggregate over clients with the same name prefix. | +| Schema | One of: `0`, `1`. | +| Default | `1` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------- | -| Key | `nftablesRefreshInterval` | -| Description | Controls the interval at which Felix periodically refreshes the nftables rules. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`.
| -| Default | `3m0s` | +##### `dnsLogsFileDirectory` -#### Data plane: eBPF[​](#data-plane-ebpf) +| Attribute | Value | +| ----------- | -------------------------------------------------- | +| Key | `dnsLogsFileDirectory` | +| Description | Sets the directory where DNS log files are stored. | +| Schema | String. | +| Default | `/var/log/calico/dnslogs` | -##### `bpfAttachType` +##### `dnsLogsFileEnabled` -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfAttachType` | -| Description | Controls how are the BPF programs at the network interfaces attached. By default `TCX` is used where available to enable easier coexistence with 3rd party programs. `TC` can force the legacy method of attaching via a qdisc. `TCX` falls back to `TC` if `TCX` is not available. | -| Schema | One of: `"TC"`, `"TCX"`. | -| Default | `TCX` | +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------- | +| Key | `dnsLogsFileEnabled` | +| Description | Controls logging DNS logs to a file. If false no DNS logging to file will occur. | +| Schema | Boolean. | +| Default | `false` | -##### `bpfCTLBLogFilter` +##### `dnsLogsFileIncludeLabels` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfCTLBLogFilter` | -| Description | Specifies, what is logged by connect time load balancer when BPFLogLevel is debug. Currently has to be specified as 'all' when BPFLogFilters is set to see CTLB logs. | -| Schema | String. 
| -| Default | none | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------- | +| Key | `dnsLogsFileIncludeLabels` | +| Description | Used to configure if endpoint labels are included in a DNS log entry written to file. | +| Schema | Boolean. | +| Default | `true` | -##### `bpfConnectTimeLoadBalancing` +##### `dnsLogsFileMaxFileSizeMB` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfConnectTimeLoadBalancing` | -| Description | When in BPF mode, controls whether Felix installs the connect-time load balancer. The connect-time load balancer is required for the host to be able to reach Kubernetes services and it improves the performance of pod-to-service connections.When set to TCP, connect time load balancing is available only for services with TCP ports. | -| Schema | One of: `"Disabled"`, `"Enabled"`, `"TCP"`. | -| Default | `TCP` | +| Attribute | Value | +| ----------- | --------------------------------------------------------- | +| Key | `dnsLogsFileMaxFileSizeMB` | +| Description | Sets the max size in MB of DNS log files before rotation. 
| +| Schema | Integer | +| Default | `100` | -##### `bpfConnectTimeLoadBalancingEnabled` +##### `dnsLogsFileMaxFiles` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfConnectTimeLoadBalancingEnabled` | -| Description | When in BPF mode, controls whether Felix installs the connection-time load balancer. The connect-time load balancer is required for the host to be able to reach Kubernetes services and it improves the performance of pod-to-service connections. The only reason to disable it is for debugging purposes.Deprecated: Use BPFConnectTimeLoadBalancing. | -| Schema | Boolean. | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------- | +| Key | `dnsLogsFileMaxFiles` | +| Description | Sets the number of DNS log files to keep. | +| Schema | Integer | +| Default | `5` | -##### `bpfConntrackMode` +##### `dnsLogsFilePerNodeLimit` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfConntrackMode` | -| Description | Controls how BPF conntrack entries are cleaned up. `Auto` will use a BPF program if supported, falling back to userspace if not. `Userspace` will always use the userspace cleanup code. 
`BPFProgram` will always use the BPF program (failing if not supported)./To be deprecated in future versions as conntrack map type changed to lru\_hash and userspace cleanup is the only mode that is supported. | -| Schema | One of: `"Auto"`, `"BPFProgram"`, `"Userspace"`. | -| Default | `Auto` | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `dnsLogsFilePerNodeLimit` | +| Description | Limit on the number of DNS logs that can be emitted within each flush interval. When this limit has been reached, Felix counts the number of unloggable DNS responses within the flush interval, and emits a WARNING log with that count at the same time as it flushes the buffered DNS logs. | +| Schema | Integer | +| Default | `0` | -##### `bpfConntrackLogLevel` +##### `dnsLogsFlushInterval` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfConntrackLogLevel` | -| Description | Controls the log level of the BPF conntrack cleanup program, which runs periodically to clean up expired BPF conntrack entries. . | -| Schema | One of: `"Debug"`, `"Off"`. | -| Default | `Off` | +| Attribute | Value | +| ----------- | -------------------------------------------------------- | +| Key | `dnsLogsFlushInterval` | +| Description | Configures the interval at which Felix exports DNS logs. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| +| Default | `5m0s` | -##### `bpfConntrackTimeouts` +##### `dnsLogsLatency` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `bpfConntrackTimeouts` | -| Description | BPFConntrackTimers overrides the default values for the specified conntrack timer if set. Each value can be either a duration or `Auto` to pick the value from a Linux conntrack timeout.Configurable timers are: CreationGracePeriod, TCPSynSent, TCPEstablished, TCPFinsSeen, TCPResetSeen, UDPTimeout, GenericTimeout, ICMPTimeout.Unset values are replaced by the default values with a warning log for incorrect values. | -| Schema | | -| Default | none | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------- | +| Key | `dnsLogsLatency` | +| Description | Controls whether measurements of DNS request/response latency are included in each DNS log. | +| Schema | Boolean.
| +| Default | `true` | -##### `bpfDNSPolicyMode` +##### `dnsPacketsNfqueueID` -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfDNSPolicyMode` | -| Description | Specifies how DNS policy programming will be handled. Inline - BPF parses DNS response inline with DNS response packet processing. This guarantees the DNS rules reflect any change immediately. NoDelay - Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. | -| Schema | | -| Default | `Inline` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `dnsPacketsNfqueueID` | +| Description | The NFQUEUE ID to use for capturing DNS packets to ensure programming IPSets occurs before the response is released. Used when DNSPolicyMode is DelayDNSResponse. 
| +| Schema | Integer | +| Default | `101` | -##### `bpfDSROptoutCIDRs` +##### `dnsPacketsNfqueueMaxHoldDuration` -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfDSROptoutCIDRs` | -| Description | A list of CIDRs which are excluded from DSR. That is, clients in those CIDRs will access service node ports as if BPFExternalServiceMode was set to Tunnel. | -| Schema | List of CIDRs: `["", ...]`. | -| Default | none | +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `dnsPacketsNfqueueMaxHoldDuration` | +| Description | The max length of time to hold on to a DNS response while waiting for the dataplane to be programmed. Used when DNSPolicyMode is DelayDNSResponse. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | +| Default | `3s` | -##### `bpfDataIfacePattern` +##### `dnsPacketsNfqueueSize` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfDataIfacePattern` | -| Description | A regular expression that controls which interfaces Felix should attach BPF programs to in order to catch traffic to/from the network. This needs to match the interfaces that Calico workload traffic flows over as well as any interfaces that handle incoming traffic to nodeports and services from outside the cluster.
It should not match the workload interfaces (usually named cali...) or any other special device managed by Calico itself (e.g., tunnels). | -| Schema | String. | -| Default | `^((en\|wl\|ww\|sl\|ib)[Popsx].*\|(eth\|wlan\|wwan\|bond).*)` | +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `dnsPacketsNfqueueSize` | +| Description | The size of the NFQUEUE for captured DNS packets. This is the maximum number of DNS packets that may be queued awaiting programming in the dataplane. Used when DNSPolicyMode is DelayDNSResponse. | +| Schema | Integer | +| Default | `100` | -##### `bpfDisableGROForIfaces` +##### `dnsPolicyMode` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfDisableGROForIfaces` | -| Description | A regular expression that controls which interfaces Felix should disable the Generic Receive Offload \[GRO] option. It should not match the workload interfaces (usually named cali...). | -| Schema | String. 
| -| Default | none | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `dnsPolicyMode` | +| Description | Specifies how DNS policy programming will be handled. DelayDeniedPacket - Felix delays any denied packet that traversed a policy that included egress domain matches, but did not match. The packet is released after a fixed time, or after the destination IP address was programmed. DelayDNSResponse - Felix delays any DNS response until related IPSets are programmed. 
This introduces some latency to all DNS packets (even when no IPSet programming is required), but it ensures policy hit statistics are accurate. This is the recommended setting when you are making use of staged policies or policy rule hit statistics. NoDelay - Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. Inline - Parses DNS response inline with DNS response packet processing within IPTables. This guarantees the DNS rules reflect any change immediately. This mode works for iptables only and matches the same mode for BPFDNSPolicyMode. This setting is ignored on Windows and "NoDelay" is always used. This setting is ignored by eBPF and BPFDNSPolicyMode is used instead. This field has no effect in NFTables mode. Please use NFTablesDNSPolicyMode instead. | +| Schema | One of: `"DelayDNSResponse"`, `"DelayDeniedPacket"`, `"Inline"`, `"NoDelay"`. | +| Default | `DelayDeniedPacket` | -##### `bpfDisableUnprivileged` +##### `dnsPolicyNfqueueID` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfDisableUnprivileged` | -| Description | If enabled, Felix sets the kernel.unprivileged\_bpf\_disabled sysctl to disable unprivileged use of BPF. This ensures that unprivileged users cannot access Calico's BPF maps and cannot insert their own BPF programs to interfere with Calico's. | -| Schema | Boolean.
| -| Default | `true` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `dnsPolicyNfqueueID` | +| Description | The NFQUEUE ID to use for DNS Policy re-evaluation when the domain's IP hasn't been programmed to ipsets yet. Used when DNSPolicyMode is DelayDeniedPacket. | +| Schema | Integer | +| Default | `100` | -##### `bpfEnabled` +##### `dnsPolicyNfqueueSize` -| Attribute | Value | -| ----------- | -------------------------------------------- | -| Key | `bpfEnabled` | -| Description | If enabled Felix will use the BPF dataplane. | -| Schema | Boolean. | -| Default | `false` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `dnsPolicyNfqueueSize` | +| Description | The size of the NFQUEUE for DNS policy re-evaluation. This is the maximum number of denied packets that may be queued up pending re-evaluation. Used when DNSPolicyMode is DelayDeniedPacket. | +| Schema | Integer | +| Default | `255` | -##### `bpfEnforceRPF` +##### `dnsTrustedServers` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfEnforceRPF` | -| Description | Enforce strict RPF on all host interfaces with BPF programs regardless of what is the per-interfaces or global setting. Possible values are Disabled, Strict or Loose. | -| Schema | One of: `Disabled`, `Loose`, `Strict`.
| -| Default | `Loose` | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `dnsTrustedServers` | +| Description | The DNS servers that Felix should trust. Each entry here must be `[:]` - indicating an explicit DNS server IP - or `k8s-service:[/][:port]` - indicating a Kubernetes DNS service. `` defaults to the first service port, or 53 for an IP, and `` to `kube-system`. An IPv6 address with a port must use the square brackets convention, for example `[fd00:83a6::12]:5353`. Note that Felix (calico-node) will need RBAC permission to read the details of each service specified by a `k8s-service:...` form. | +| Schema | List of strings: `["", ...]`. | +| Default | none | +#### L7 logs[​](#l7-logs) -##### `bpfExcludeCIDRsFromNAT` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `bpfExcludeCIDRsFromNAT` | -| Description | A list of CIDRs that are to be excluded from NAT resolution so that host can handle them. A typical usecase is node local DNS cache. | -| Schema | List of CIDRs: `["", ...]`.
| -| Default | none | +##### `l7LogsFileAggregationDestinationInfo` -##### `bpfExportBufferSizeMB` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `l7LogsFileAggregationDestinationInfo` | +| Description | Used to choose the type of aggregation for the destination metadata on L7 log entries. Accepted values are IncludeL7DestinationInfo and ExcludeL7DestinationInfo. IncludeL7DestinationInfo - Include destination metadata in the logs. ExcludeL7DestinationInfo - Aggregate over all other fields ignoring the destination aggregated name, namespace, and type. | +| Schema | One of: `ExcludeL7DestinationInfo`, `IncludeL7DestinationInfo`. | +| Default | `IncludeL7DestinationInfo` | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------- | -| Key | `bpfExportBufferSizeMB` | -| Description | In BPF mode, controls the buffer size used for sending BPF events to felix. | -| Schema | Integer | -| Default | `1` | +##### `l7LogsFileAggregationHTTPHeaderInfo` -##### `bpfExtToServiceConnmark` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `l7LogsFileAggregationHTTPHeaderInfo` | +| Description | Used to choose the type of aggregation for HTTP header data on L7 log entries. Accepted values are IncludeL7HTTPHeaderInfo and ExcludeL7HTTPHeaderInfo.
IncludeL7HTTPHeaderInfo - Include HTTP header data in the logs. ExcludeL7HTTPHeaderInfo - Aggregate over all other fields ignoring the user agent and log type. | +| Schema | One of: `ExcludeL7HTTPHeaderInfo`, `IncludeL7HTTPHeaderInfo`. | +| Default | `ExcludeL7HTTPHeaderInfo` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfExtToServiceConnmark` | -| Description | In BPF mode, controls a 32bit mark that is set on connections from an external client to a local service. This mark allows us to control how packets of that connection are routed within the host and how is routing interpreted by RPF check. | -| Schema | Integer | -| Default | `0` | +##### `l7LogsFileAggregationHTTPMethod` -##### `bpfExternalServiceMode` +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `l7LogsFileAggregationHTTPMethod` | +| Description | Used to choose the type of aggregation for the HTTP request method on L7 log entries. Accepted values are IncludeL7HTTPMethod and ExcludeL7HTTPMethod. IncludeL7HTTPMethod - Include HTTP method in the logs. ExcludeL7HTTPMethod - Aggregate over all other fields ignoring the HTTP method. | +| Schema | One of: `ExcludeL7HTTPMethod`, `IncludeL7HTTPMethod`. 
| +| Default | `IncludeL7HTTPMethod` | -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfExternalServiceMode` | -| Description | In BPF mode, controls how connections from outside the cluster to services (node ports and cluster IPs) are forwarded to remote workloads. If set to "Tunnel" then both request and response traffic is tunneled to the remote node. If set to "DSR", the request traffic is tunneled but the response traffic is sent directly from the remote node. In "DSR" mode, the remote node appears to use the IP of the ingress node; this requires a permissive L2 network. | -| Schema | One of: `DSR`, `Tunnel`. | -| Default | `Tunnel` | +##### `l7LogsFileAggregationNumURLPath` -##### `bpfForceTrackPacketsFromIfaces` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `l7LogsFileAggregationNumURLPath` | +| Description | Used to choose the number of components in the URL path to display. This allows for the URL to be truncated in case parts of the path provide no value. Setting this value to negative will allow all parts of the path to be displayed. 
| +| Schema | Integer | +| Default | `5` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfForceTrackPacketsFromIfaces` | -| Description | In BPF mode, forces traffic from these interfaces to skip Calico's iptables NOTRACK rule, allowing traffic from those interfaces to be tracked by Linux conntrack. Should only be used for interfaces that are not used for the Calico fabric. For example, a docker bridge device for non-Calico-networked containers. | -| Schema | List of interface names (may use `+` as a wildcard: `["", ...]`. | -| Default | `["docker+"]` | +##### `l7LogsFileAggregationResponseCode` -##### `bpfHostConntrackBypass` +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `l7LogsFileAggregationResponseCode` | +| Description | Used to choose the type of aggregation for the response code on L7 log entries. Accepted values are IncludeL7ResponseCode and ExcludeL7ResponseCode. IncludeL7ResponseCode - Include the response code in the logs. ExcludeL7ResponseCode - Aggregate over all other fields ignoring the response code. | +| Schema | One of: `ExcludeL7ResponseCode`, `IncludeL7ResponseCode`. 
| +| Default | `IncludeL7ResponseCode` | -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------- | -| Key | `bpfHostConntrackBypass` | -| Description | Controls whether to bypass Linux conntrack in BPF mode for workloads and services. | -| Schema | Boolean. | -| Default | `false` | +##### `l7LogsFileAggregationServiceInfo` -##### `bpfHostNetworkedNATWithoutCTLB` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `l7LogsFileAggregationServiceInfo` | +| Description | Used to choose the type of aggregation for the service data on L7 log entries. Accepted values are IncludeL7ServiceInfo and ExcludeL7ServiceInfo. IncludeL7ServiceInfo - Include service data in the logs. ExcludeL7ServiceInfo - Aggregate over all other fields ignoring the service name, namespace, and port. | +| Schema | One of: `ExcludeL7ServiceInfo`, `IncludeL7ServiceInfo`. | +| Default | `IncludeL7ServiceInfo` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfHostNetworkedNATWithoutCTLB` | -| Description | When in BPF mode, controls whether Felix does a NAT without CTLB. This along with BPFConnectTimeLoadBalancing determines the CTLB behavior. 
| -| Schema | | -| Default | `Enabled` | +##### `l7LogsFileAggregationSourceInfo` -##### `bpfKubeProxyEndpointSlicesEnabled` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `l7LogsFileAggregationSourceInfo` | +| Description | Used to choose the type of aggregation for the source metadata on L7 log entries. Accepted values are IncludeL7SourceInfo, IncludeL7SourceInfoNoPort, and ExcludeL7SourceInfo. IncludeL7SourceInfo - Include source metadata in the logs. IncludeL7SourceInfoNoPort - Include source metadata in the logs excluding the source port. ExcludeL7SourceInfo - Aggregate over all other fields ignoring the source aggregated name, namespace, and type. | +| Schema | One of: `ExcludeL7SourceInfo`, `IncludeL7SourceInfo`, `IncludeL7SourceInfoNoPort`. | +| Default | `IncludeL7SourceInfoNoPort` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfKubeProxyEndpointSlicesEnabled` | -| Description | Deprecated and has no effect. BPF kube-proxy always accepts endpoint slices. This option will be removed in the next release. | -| Schema | Boolean. 
| -| Default | `true` | +##### `l7LogsFileAggregationTrimURL` -##### `bpfKubeProxyIptablesCleanupEnabled` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `l7LogsFileAggregationTrimURL` | +| Description | Used to choose the type of aggregation for the URL on L7 log entries. Accepted values: IncludeL7FullURL - Include the full URL up to however many path components are allowed by L7LogsFileAggregationNumURLPath. TrimURLQuery - Aggregate over all other fields ignoring the query parameters on the URL. TrimURLQueryAndPath - Aggregate over all other fields and the base URL only. ExcludeL7URL - Aggregate over all other fields ignoring the URL entirely. | +| Schema | One of: `ExcludeL7URL`, `IncludeL7FullURL`, `TrimURLQuery`, `TrimURLQueryAndPath`. | +| Default | `IncludeL7FullURL` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `bpfKubeProxyIptablesCleanupEnabled` | -| Description | If enabled in BPF mode, Felix will proactively clean up the upstream Kubernetes kube-proxy's iptables chains. Should only be enabled if kube-proxy is not running. | -| Schema | Boolean. 
| -| Default | `true` | +##### `l7LogsFileAggregationURLCharLimit` -##### `bpfKubeProxyMinSyncPeriod` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `l7LogsFileAggregationURLCharLimit` | +| Description | Limit on the length of the URL collected in L7 logs. When a URL reaches this limit, it is truncated and the truncated URL is sent to log storage. | +| Schema | Integer | +| Default | `250` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfKubeProxyMinSyncPeriod` | -| Description | In BPF mode, controls the minimum time between updates to the dataplane for Felix's embedded kube-proxy. Lower values give reduced set-up latency. Higher values reduce Felix CPU usage by batching up more work. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `1s` | +##### `l7LogsFileDirectory` -##### `bpfL3IfacePattern` +| Attribute | Value | +| ----------- | ------------------------------------------------- | +| Key | `l7LogsFileDirectory` | +| Description | Sets the directory where L7 log files are stored. | +| Schema | String. 
| +| Default | `/var/log/calico/l7logs` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfL3IfacePattern` | -| Description | A regular expression that allows to list tunnel devices like WireGuard or vxlan (i.e., L3 devices) in addition to BPFDataIfacePattern. That is, tunnel interfaces not created by Calico, that Calico workload traffic flows over as well as any interfaces that handle incoming traffic to nodeports and services from outside the cluster. | -| Schema | String. | -| Default | none | +##### `l7LogsFileEnabled` -##### `bpfLogFilters` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------ | +| Key | `l7LogsFileEnabled` | +| Description | Controls logging L7 logs to a file. If false no L7 logging to file will occur. | +| Schema | Boolean. | +| Default | `true` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfLogFilters` | -| Description | A map of key=values where the value is a pcap filter expression and the key is an interface name with 'all' denoting all interfaces, 'weps' all workload endpoints and 'heps' all host endpoints.When specified as an env var, it accepts a comma-separated list of key=values. 
| -| Schema | | -| Default | none | +##### `l7LogsFileMaxFileSizeMB` -##### `bpfLogLevel` +| Attribute | Value | +| ----------- | -------------------------------------------------------- | +| Key | `l7LogsFileMaxFileSizeMB` | +| Description | Sets the max size in MB of L7 log files before rotation. | +| Schema | Integer | +| Default | `100` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfLogLevel` | -| Description | Controls the log level of the BPF programs when in BPF dataplane mode. One of "Off", "Info", or "Debug". The logs are emitted to the BPF trace pipe, accessible with the command `tc exec bpf debug`. . | -| Schema | One of: `Debug`, `Info`, `Off`. | -| Default | `Off` | +##### `l7LogsFileMaxFiles` -##### `bpfMapSizeConntrack` +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `l7LogsFileMaxFiles` | +| Description | Sets the number of L7 log files to keep. | +| Schema | Integer | +| Default | `5` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizeConntrack` | -| Description | Sets the size for the conntrack map. This map must be large enough to hold an entry for each active connection. Warning: changing the size of the conntrack map can cause disruption. 
| -| Schema | Integer | -| Default | `512000` | +##### `l7LogsFilePerNodeLimit` -##### `bpfMapSizeConntrackCleanupQueue` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `l7LogsFilePerNodeLimit` | +| Description | Limit on the number of L7 logs that can be emitted within each flush interval. When this limit has been reached, Felix counts the number of unloggable L7 responses within the flush interval, and emits a WARNING log with that count at the same time as it flushes the buffered L7 logs. A value of 0 means no limit. | +| Schema | Integer | +| Default | `1500` | -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizeConntrackCleanupQueue` | -| Description | Sets the size for the map used to hold NAT conntrack entries that are queued for cleanup. This should be big enough to hold all the NAT entries that expire within one cleanup interval. | -| Schema | Integer | -| Default | `100000` | +##### `l7LogsFlushInterval` -##### `bpfMapSizeConntrackScaling` +| Attribute | Value | +| ----------- | ------------------------------------------------------- | +| Key | `l7LogsFlushInterval` | +| Description | Configures the interval at which Felix exports L7 logs. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| +| Default | `5m0s` | -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizeConntrackScaling` | -| Description | Controls whether and how we scale the conntrack map size depending on its usage. 'Disabled' make the size stay at the default or whatever is set by BPFMapSizeConntrack\*. 'DoubleIfFull' doubles the size when the map is pretty much full even after cleanups. | -| Schema | One of: `Disabled`, `DoubleIfFull`. | -| Default | `DoubleIfFull` | +#### AWS integration[​](#aws-integration) -##### `bpfMapSizeIPSets` +##### `awsRequestTimeout` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizeIPSets` | -| Description | Sets the size for ipsets map. The IP sets map must be large enough to hold an entry for each endpoint matched by every selector in the source/destination matches in network policy. Selectors such as "all()" can result in large numbers of entries (one entry per endpoint in that case). | -| Schema | Integer | -| Default | `1048576` | +| Attribute | Value | +| ----------- | ---------------------------------------------------- | +| Key | `awsRequestTimeout` | +| Description | The timeout on AWS API requests. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| +| Default | `30s` | -##### `bpfMapSizeIfState` +##### `awsSecondaryIPRoutingRulePriority` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizeIfState` | -| Description | Sets the size for ifstate map. The ifstate map must be large enough to hold an entry for each device (host + workloads) on a host. | -| Schema | Integer | -| Default | `1000` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------- | +| Key | `awsSecondaryIPRoutingRulePriority` | +| Description | Controls the priority that Felix will use for routing rules when programming them for AWS Secondary IP support. | +| Schema | Integer: \[0,4294967295] | +| Default | `101` | -##### `bpfMapSizeNATAffinity` +##### `awsSecondaryIPSupport` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizeNATAffinity` | -| Description | Sets the size of the BPF map that stores the affinity of a connection (for services that enable that feature. 
| -| Schema | Integer | -| Default | `65536` | +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `awsSecondaryIPSupport` | +| Description | Controls whether Felix will try to provision AWS secondary ENIs for workloads that have IPs from IP pools that are configured with an AWS subnet ID. If the field is set to "EnabledENIPerWorkload" then each workload with an AWS-backed IP will be assigned its own secondary ENI. If set to "Enabled" then each workload with an AWS-backed IP pool will be allocated a secondary IP address on a secondary ENI; this mode requires additional IP pools to be provisioned for the host to claim IPs for the primary IP of the secondary ENIs. Accepted value must be one of "Enabled", "EnabledENIPerWorkload" or "Disabled". | +| Schema | One of: `Disabled`, `Enabled`, `EnabledENIPerWorkload`. | +| Default | `Disabled` | -##### `bpfMapSizeNATBackend` +##### `awsSrcDstCheck` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizeNATBackend` | -| Description | Sets the size for NAT back end map. This is the total number of endpoints. This is mostly more than the size of the number of services. 
| -| Schema | Integer | -| Default | `262144` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `awsSrcDstCheck` | +| Description | Controls whether Felix will try to change the "source/dest check" setting on the EC2 instance on which it is running. A value of "Disable" will try to disable the source/dest check. Disabling the check allows for sending workload traffic without encapsulation within the same AWS subnet. | +| Schema | One of: `"Disable"`, `"DoNothing"`, `"Enable"`. | +| Default | `DoNothing` | -##### `bpfMapSizeNATFrontend` +#### Egress gateway[​](#egress-gateway) -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `bpfMapSizeNATFrontend` | -| Description | Sets the size for NAT front end map. FrontendMap should be large enough to hold an entry for each nodeport, external IP and each port in each service. | -| Schema | Integer | -| Default | `65536` | +##### `egressGatewayPollFailureCount` -##### `bpfMapSizePerCpuConntrack` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------ | +| Key | `egressGatewayPollFailureCount` | +| Description | The minimum number of poll failures before a remote Egress Gateway is considered to have failed. 
| +| Schema | Integer | +| Default | `3` | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizePerCpuConntrack` | -| Description | Determines the size of conntrack map based on the number of CPUs. If set to a non-zero value, overrides BPFMapSizeConntrack with `BPFMapSizePerCPUConntrack * (Number of CPUs)`. This map must be large enough to hold an entry for each active connection. Warning: changing the size of the conntrack map can cause disruption. | -| Schema | Integer | -| Default | `0` | +##### `egressGatewayPollInterval` -##### `bpfMapSizeRoute` +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `egressGatewayPollInterval` | +| Description | The interval at which Felix will poll remote egress gateways to check their health. Only Egress Gateways with a named "health" port will be polled in this way. Egress Gateways that fail the health check will be taken out of use as if they have been deleted. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | +| Default | `10s` | -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfMapSizeRoute` | -| Description | Sets the size for the routes map. 
The routes map should be large enough to hold one entry per workload and a handful of entries per host (enough to cover its own IPs and tunnel IPs). | -| Schema | Integer | -| Default | `262144` | +##### `egressIPHostIfacePattern` -##### `bpfPSNATPorts` +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `egressIPHostIfacePattern` | +| Description | A comma-separated list of interface names which might send and receive egress traffic across the cluster boundary, after it has left an Egress Gateway pod. Felix will ensure `src_valid_mark` sysctl flags are set correctly for matching interfaces. To target multiple interfaces with a single string, the list supports regular expressions. For regular expressions, wrap the value with `/`. Example: `/^bond/,eth0` will match all interfaces that begin with `bond` and also the interface `eth0`. | +| Schema | String. | +| Default | none | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfPSNATPorts` | -| Description | Sets the range from which we randomly pick a port if there is a source port collision. 
This should be within the ephemeral range as defined by RFC 6056 (1024–65535) and preferably outside the ephemeral ranges used by common operating systems. Linux uses 32768–60999, while others mostly use the IANA defined range 49152–65535. It is not necessarily a problem if this range overlaps with the operating systems. Both ends of the range are inclusive. | -| Schema | String. | -| Default | `20000:29999` | +##### `egressIPRoutingRulePriority` -##### `bpfPolicyDebugEnabled` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------ | +| Key | `egressIPRoutingRulePriority` | +| Description | Controls the priority value to use for the egress IP routing rule. | +| Schema | Integer | +| Default | `100` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfPolicyDebugEnabled` | -| Description | When true, Felix records detailed information about the BPF policy programs, which can be examined with the calico-bpf command-line tool. | -| Schema | Boolean. | -| Default | `true` | +##### `egressIPSupport` -##### `bpfProfiling` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `egressIPSupport` | +| Description | Defines three different support modes for egress IP function. - Disabled: Egress IP function is disabled. - EnabledPerNamespace: Egress IP function is enabled and can be configured on a per-namespace basis; per-pod egress annotations are ignored. 
- EnabledPerNamespaceOrPerPod: Egress IP function is enabled and can be configured per-namespace or per-pod, with per-pod egress annotations overriding namespace annotations. | +| Schema | One of: `Disabled`, `EnabledPerNamespace`, `EnabledPerNamespaceOrPerPod`. | +| Default | `Disabled` | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------- | -| Key | `bpfProfiling` | -| Description | Controls profiling of BPF programs. At the monent, it can be Disabled or Enabled. | -| Schema | One of: `"Disabled"`, `"Enabled"`. | -| Default | `Disabled` | +##### `egressIPVXLANPort` -##### `bpfRedirectToPeer` +| Attribute | Value | +| ----------- | ---------------------------------------------------------- | +| Key | `egressIPVXLANPort` | +| Description | The port number of vxlan tunnel device for egress traffic. | +| Schema | Integer | +| Default | `4790` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `bpfRedirectToPeer` | -| Description | Controls which whether it is allowed to forward straight to the peer side of the workload devices. It is allowed for any host L2 devices by default (L2Only), but it breaks TCP dump on the host side of workload device as it bypasses it on ingress. Value of Enabled also allows redirection from L3 host devices like IPIP tunnel or Wireguard directly to the peer side of the workload's device. This makes redirection faster, however, it breaks tools like tcpdump on the peer side. 
Use Enabled with caution. | -| Schema | One of: `"Disabled"`, `"Enabled"`, `"L2Only"`. | -| Default | `Disabled` | +##### `egressIPVXLANVNI` -#### Data plane: Windows[​](#data-plane-windows) +| Attribute | Value | +| ----------- | ----------------------------------------------------- | +| Key | `egressIPVXLANVNI` | +| Description | The VNI ID of vxlan tunnel device for egress traffic. | +| Schema | Integer | +| Default | `4097` | -##### `windowsDnsCacheFile` +#### External network support[​](#external-network-support) -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------ | -| Key | `windowsDnsCacheFile` | -| Description | The name of the file that Felix uses to preserve learnt DNS information when restarting. . | -| Schema | String. | -| Default | `c:\TigeraCalico\felix-dns-cache.txt` | +##### `externalNetworkRoutingRulePriority` -##### `windowsDnsExtraTTL` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------- | +| Key | `externalNetworkRoutingRulePriority` | +| Description | Controls the priority value to use for the external network routing rule. | +| Schema | Integer | +| Default | `102` | -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `windowsDnsExtraTTL` | -| Description | Extra time to keep IPs and alias names that are learnt from DNS, in addition to each name or IP's advertised TTL. The default value is 120s which is same as the default value of ServicePointManager.DnsRefreshTimeout on .net framework. . | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| -| Default | `2m0s` | +##### `externalNetworkSupport` -##### `windowsFlowLogsFileDirectory` +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `externalNetworkSupport` | +| Description | Defines two different support modes for external network function. - Disabled: External network function is disabled. - Enabled: External network function is enabled. | +| Schema | One of: `Disabled`, `Enabled`. | +| Default | `Disabled` | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------- | -| Key | `windowsFlowLogsFileDirectory` | -| Description | Sets the directory where flow logs files are stored on Windows nodes. . | -| Schema | String. | -| Default | `c:\TigeraCalico\flowlogs` | +#### Packet capture[​](#packet-capture) -##### `windowsFlowLogsPositionFilePath` +##### `captureDir` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `windowsFlowLogsPositionFilePath` | -| Description | Used to specify the position of the external pipeline that reads flow logs on Windows nodes. . This parameter only takes effect when FlowLogsDynamicAggregationEnabled is set to true. | -| Schema | String. | -| Default | `c:\TigeraCalico\flowlogs\flows.log.pos` | +| Attribute | Value | +| ----------- | ----------------------------------------- | +| Key | `captureDir` | +| Description | Controls directory to store file capture. | +| Schema | String. 
|
+| Default | `/var/log/calico/pcap` |

-##### `windowsManageFirewallRules`

+##### `captureMaxFiles`

-| Attribute | Value |
-| ----------- | ----------------------------------------------------------------------------------------------------------------------- |
-| Key | `windowsManageFirewallRules` |
-| Description | Configures whether or not Felix will program Windows Firewall rules (to allow inbound access to its own metrics ports). |
-| Schema | One of: `"Disabled"`, `"Enabled"`. |
-| Default | `Disabled` |

+| Attribute | Value |
+| ----------- | ------------------------------------------------ |
+| Key | `captureMaxFiles` |
+| Description | Controls the number of rotated capture files to keep. |
+| Schema | Integer |
+| Default | `2` |

-##### `windowsNetworkName`

+##### `captureMaxSizeBytes`

-| Attribute | Value |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key | `windowsNetworkName` |
-| Description | Specifies which Windows HNS networks Felix should operate on. The default is to match networks that start with "calico". Supports regular expression syntax. |
-| Schema | String. |
-| Default | `(?i)calico.*` |

+| Attribute | Value |
+| ----------- | ---------------------------------------- |
+| Key | `captureMaxSizeBytes` |
+| Description | Controls the maximum size, in bytes, of each capture file. |
+| Schema | Integer |
+| Default | `10000000` |

-##### `windowsStatsDumpFilePath`

+##### `captureRotationSeconds`

-| Attribute | Value |
-| ----------- | ----------------------------------------------------------------- |
-| Key | `windowsStatsDumpFilePath` |
-| Description | Used to specify the path of the stats dump file on Windows nodes. |
-| Schema | String. 
| -| Default | `c:\TigeraCalico\stats\dump` | +| Attribute | Value | +| ----------- | ----------------------------------------------- | +| Key | `captureRotationSeconds` | +| Description | Controls the time rotation of a packet capture. | +| Schema | Integer | +| Default | `3600` | -#### Data plane: OpenStack support[​](#data-plane-openstack-support) +#### L7 proxy[​](#l7-proxy) -##### `endpointReportingDelay` +##### `tproxyMode` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `endpointReportingDelay` | -| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud.The delay before Felix reports endpoint status to the datastore. This is only used by the OpenStack integration. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `1s` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------ | +| Key | `tproxyMode` | +| Description | Sets whether traffic is directed through a transparent proxy for further processing or not and how is the proxying done. | +| Schema | One of: `Disabled`, `Enabled`, `EnabledAllServices`. | +| Default | `Disabled` | -##### `endpointReportingEnabled` +##### `tproxyPort` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `endpointReportingEnabled` | -| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud.Controls whether Felix reports endpoint status to the datastore. This is only used by the OpenStack integration. 
| -| Schema | Boolean. | -| Default | `false` | +| Attribute | Value | +| ----------- | -------------------------------------------------------- | +| Key | `tproxyPort` | +| Description | Sets to which port proxied traffic should be redirected. | +| Schema | Integer | +| Default | `16001` | -##### `metadataAddr` +##### `tproxyUpstreamConnMark` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `metadataAddr` | -| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud.The IP address or domain name of the server that can answer VM queries for cloud-init metadata. In OpenStack, this corresponds to the machine running nova-api (or in Ubuntu, nova-api-metadata). A value of none (case-insensitive) means that Felix should not set up any NAT rule for the metadata path. | -| Schema | String. | -| Default | `127.0.0.1` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------- | +| Key | `tproxyUpstreamConnMark` | +| Description | Tells Felix which mark is used by the proxy for its upstream connections so that Felix can program the dataplane correctly. | +| Schema | Unsigned 32-bit integer. 
| +| Default | `0x17` | -##### `metadataPort` +#### Debug/test-only (generally unsupported)[​](#debugtest-only-generally-unsupported) -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `metadataPort` | -| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud.The port of the metadata server. This, combined with global.MetadataAddr (if not 'None'), is used to set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort. In most cases this should not need to be changed . | -| Schema | Integer: \[0,65535] | -| Default | `8775` | +##### `debugDisableLogDropping` -##### `openstackRegion` +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `debugDisableLogDropping` | +| Description | Disables the dropping of log messages when the log buffer is full. This can significantly impact performance if log write-out is a bottleneck. | +| Schema | Boolean. 
| +| Default | `false` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `openstackRegion` | -| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud.The name of the region that a particular Felix belongs to. In a multi-region Calico/OpenStack deployment, this must be configured somehow for each Felix (here in the datamodel, or in felix.cfg or the environment on each compute node), and must match the \[calico] openstack\_region value configured in neutron.conf on each node. | -| Schema | String. | -| Default | none | +##### `debugHost` -##### `reportingInterval` +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------- | +| Key | `debugHost` | +| Description | The host IP or hostname to bind the debug port to. Only used if DebugPort is set. | +| Schema | String. | +| Default | `localhost` | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `reportingInterval` | -| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud.The interval at which Felix reports its status into the datastore or 0 to disable. Must be non-zero in OpenStack deployments. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| -| Default | `30s` | +##### `debugMemoryProfilePath` -##### `reportingTTL` +| Attribute | Value | +| ----------- | ----------------------------------------------------------------- | +| Key | `debugMemoryProfilePath` | +| Description | The path to write the memory profile to when triggered by signal. | +| Schema | String. | +| Default | none | -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `reportingTTL` | -| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud.The time-to-live setting for process-wide status reports. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `1m30s` | +##### `debugPort` -#### Data plane: XDP acceleration for iptables data plane[​](#data-plane-xdp-acceleration-for-iptables-data-plane) +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `debugPort` | +| Description | If set, enables Felix's debug HTTP port, which allows memory and CPU profiles to be retrieved. The debug port is not secure, it should not be exposed to the internet. | +| Schema | Integer: \[0,65535] | +| Default | none | -##### `genericXDPEnabled` +##### `debugSimulateCalcGraphHangAfter` -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `genericXDPEnabled` | -| Description | Enables Generic XDP so network cards that don't support XDP offload or driver modes can use XDP. This is not recommended since it doesn't provide better performance than iptables. | -| Schema | Boolean. 
| -| Default | `false` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------- | +| Key | `debugSimulateCalcGraphHangAfter` | +| Description | Used to simulate a hang in the calculation graph after the specified duration. This is useful in tests of the watchdog system only! | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | +| Default | `0s` | -##### `xdpEnabled` +##### `debugSimulateDataplaneApplyDelay` -| Attribute | Value | -| ----------- | -------------------------------------------------------------------- | -| Key | `xdpEnabled` | -| Description | Enables XDP acceleration for suitable untracked incoming deny rules. | -| Schema | Boolean. | -| Default | `false` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `debugSimulateDataplaneApplyDelay` | +| Description | Adds an artificial delay to every dataplane operation. This is useful for simulating a heavily loaded system for test purposes only. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | +| Default | `0s` | -##### `xdpRefreshInterval` +##### `debugSimulateDataplaneHangAfter` -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `xdpRefreshInterval` | -| Description | The period at which Felix re-checks all XDP state to ensure that no other process has accidentally broken Calico's BPF maps or attached programs. Set to 0 to disable XDP refresh. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| -| Default | `1m30s` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------- | +| Key | `debugSimulateDataplaneHangAfter` | +| Description | Used to simulate a hang in the dataplane after the specified duration. This is useful in tests of the watchdog system only! | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | +| Default | `0s` | -#### Overlay: VXLAN overlay[​](#overlay-vxlan-overlay) +##### `statsDumpFilePath` -##### `vxlanEnabled` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------- | +| Key | `statsDumpFilePath` | +| Description | The path to write a diagnostic flow logs statistics dump to when triggered by signal. | +| Schema | String. | +| Default | `/var/log/calico/stats/dump` | -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `vxlanEnabled` | -| Description | Overrides whether Felix should create the VXLAN tunnel device for IPv4 VXLAN networking. Optional as Felix determines this based on the existing IP pools. | -| Schema | Boolean. | -| Default | none | +#### Usage reporting[​](#usage-reporting) -##### `vxlanMTU` +##### `usageReportingEnabled` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------- | -| Key | `vxlanMTU` | -| Description | The MTU to set on the IPv4 VXLAN tunnel device. Optional as Felix auto-detects the MTU based on the MTU of the host's interfaces. 
| -| Schema | Integer | -| Default | `0` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------- | +| Key | `usageReportingEnabled` | +| Description | Unused in Calico Enterprise, usage reporting is permanently disabled. | +| Schema | Boolean. | +| Default | `true` | -##### `vxlanMTUV6` +##### `usageReportingInitialDelay` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------- | -| Key | `vxlanMTUV6` | -| Description | The MTU to set on the IPv6 VXLAN tunnel device. Optional as Felix auto-detects the MTU based on the MTU of the host's interfaces. | -| Schema | Integer | -| Default | `0` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------- | +| Key | `usageReportingInitialDelay` | +| Description | Unused in Calico Enterprise, usage reporting is permanently disabled. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | +| Default | `5m0s` | -##### `vxlanPort` +##### `usageReportingInterval` -| Attribute | Value | -| ----------- | --------------------------------------------- | -| Key | `vxlanPort` | -| Description | The UDP port number to use for VXLAN traffic. | -| Schema | Integer | -| Default | `4789` | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------- | +| Key | `usageReportingInterval` | +| Description | Unused in Calico Enterprise, usage reporting is permanently disabled. | +| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| +| Default | `24h0m0s` | -##### `vxlanVNI` +### Health Timeout Overrides[​](#health-timeout-overrides) -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------ | -| Key | `vxlanVNI` | -| Description | The VXLAN VNI to use for VXLAN traffic. You may need to change this if the default value is in use on your system. | -| Schema | Integer | -| Default | `4096` | +Felix has internal liveness and readiness watchdog timers that monitor its various loops. If a loop fails to "check in" within the allotted timeout then Felix will report non-Ready or non-Live on its health port (which is monitored by kubelet in a Kubernetes system). If Felix reports non-Live, this can result in the Pod being restarted. -#### Overlay: IP-in-IP[​](#overlay-ip-in-ip) +In Kubernetes, if you see the calico-node Pod readiness or liveness checks fail intermittently, check the calico-node Pod log for a log from Felix that gives the overall health status (the list of components will depend on which features are enabled): -##### `ipipEnabled` +```text ++---------------------------+---------+----------------+-----------------+--------+ -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `ipipEnabled` | -| Description | Overrides whether Felix should configure an IPIP interface on the host. Optional as Felix determines this based on the existing IP pools. | -| Schema | Boolean. 
| -| Default | none | +| COMPONENT | TIMEOUT | LIVENESS | READINESS | DETAIL | -##### `ipipMTU` ++---------------------------+---------+----------------+-----------------+--------+ -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `ipipMTU` | -| Description | Controls the MTU to set on the IPIP tunnel device. Optional as Felix auto-detects the MTU based on the MTU of the host's interfaces. | -| Schema | Integer | -| Default | `0` | +| CalculationGraph | 30s | reporting live | reporting ready | | -#### Overlay: WireGuard[​](#overlay-wireguard) +| FelixStartup | 0s | reporting live | reporting ready | | -No matching group found for 'Overlay: WireGuard'. +| InternalDataplaneMainLoop | 1m30s | reporting live | reporting ready | | -#### Overlay: IPSec[​](#overlay-ipsec) ++---------------------------+---------+----------------+-----------------+--------+ +``` -##### `ipsecAllowUnsecuredTraffic` +If some health timeouts show as "timed out" it may help to apply an override using the `healthTimeoutOverrides` field: -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `ipsecAllowUnsecuredTraffic` | -| Description | Controls whether non-IPsec traffic is allowed in addition to IPsec traffic. Enabling this negates the anti-spoofing protections of IPsec but it is useful when migrating to/from IPsec. | -| Schema | Boolean. | -| Default | `false` | +```yaml +... -##### `ipsecESPAlgorithm` +spec: -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------- | -| Key | `ipsecESPAlgorithm` | -| Description | IPSecESAlgorithm sets IPSec ESP algorithm. Default is NIST suite B recommendation. | -| Schema | String. 
| -| Default | `aes128gcm16-ecp256` | + healthTimeoutOverrides: -##### `ipsecIKEAlgorithm` + - name: InternalDataplaneMainLoop -| Attribute | Value | -| ----------- | ----------------------------------------------------------------- | -| Key | `ipsecIKEAlgorithm` | -| Description | Sets IPSec IKE algorithm. Default is NIST suite B recommendation. | -| Schema | String. | -| Default | `aes128gcm16-prfsha256-ecp256` | + timeout: "5m" -##### `ipsecLogLevel` + - name: CalculationGraph -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `ipsecLogLevel` | -| Description | Controls log level for IPSec components. Set to None for no logging. A generic log level terminology is used \[None, Notice, Info, Debug, Verbose]. | -| Schema | One of: `Debug`, `Info`, `None`, `Notice`, `Verbose`. | -| Default | `Info` | + timeout: "1m30s" -##### `ipsecMode` + ... +``` -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------ | -| Key | `ipsecMode` | -| Description | Controls which mode IPSec is operating on. Default value means IPSec is not enabled. | -| Schema | String. | -| Default | none | +A timeout value of 0 disables the timeout. -##### `ipsecPolicyRefreshInterval` +### ProtoPort[​](#protoport) -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------- | -| Key | `ipsecPolicyRefreshInterval` | -| Description | The interval at which Felix will check the kernel's IPsec policy tables and repair any inconsistencies. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| -| Default | `10m0s` | +| Field | Description | Accepted Values | Schema | +| -------- | -------------------- | ------------------------------------ | ------ | +| port | The exact port match | 0-65535 | int | +| protocol | The protocol match | tcp, udp, sctp | string | +| net | The CIDR match | any valid CIDR (e.g. 192.168.0.0/16) | string | -#### Flow logs: Prometheus reports[​](#flow-logs-prometheus-reports) +Keep in mind that in the following example, `net: ""` and `net: "0.0.0.0/0"` are processed as the same in the policy enforcement. -##### `deletedMetricsRetentionSecs` +```yaml + ... -| Attribute | Value | -| ----------- | -------------------------------------------------------------- | -| Key | `deletedMetricsRetentionSecs` | -| Description | Controls how long metrics are retianed after the flow is gone. | -| Schema | Integer. | -| Default | `30s` | +spec: -##### `prometheusReporterCAFile` + failsafeInboundHostPorts: -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------- | -| Key | `prometheusReporterCAFile` | -| Description | The path to the TLS CA file for the Prometheus per-flow metrics reporter. | -| Schema | String. | -| Default | none | + - net: "192.168.1.1/32" -##### `prometheusReporterCertFile` + port: 22 -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------- | -| Key | `prometheusReporterCertFile` | -| Description | The path to the TLS certificate file for the Prometheus per-flow metrics reporter. | -| Schema | String. | -| Default | none | + protocol: tcp -##### `prometheusReporterEnabled` + - net: "" -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | -| Key | `prometheusReporterEnabled` | -| Description | Controls whether the Prometheus per-flow metrics reporter is enabled. 
This is used to show real-time flow metrics in the UI. | -| Schema | Boolean. | -| Default | `false` | + port: 67 -##### `prometheusReporterKeyFile` + protocol: udp -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------- | -| Key | `prometheusReporterKeyFile` | -| Description | The path to the TLS private key file for the Prometheus per-flow metrics reporter. | -| Schema | String. | -| Default | none | +failsafeOutboundHostPorts: -##### `prometheusReporterPort` + - net: "0.0.0.0/0" -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------- | -| Key | `prometheusReporterPort` | -| Description | The port that the Prometheus per-flow metrics reporter should bind to. | -| Schema | Integer: \[0,65535] | -| Default | `9092` | + port: 67 -#### Flow logs: Syslog reports[​](#flow-logs-syslog-reports) + protocol: udp -##### `syslogReporterAddress` + ... +``` -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `syslogReporterAddress` | -| Description | The address to dial to when writing to Syslog. For TCP and UDP networks, the address has the form "host:port". The host must be a literal IP address, or a host name that can be resolved to IP addresses. The port must be a literal port number or a service name. For more, see: https\://pkg.go.dev/net#Dial. | -| Schema | String. 
| -| Default | none | +### AggregationKind[​](#aggregationkind) -##### `syslogReporterEnabled` +| Value | Description | +| ----- | ---------------------------------------------------------------------------------------- | +| 0 | No aggregation | +| 1 | Aggregate all flows that share a source port on each node | +| 2 | Aggregate all flows that share source ports or are from the same ReplicaSet on each node | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `syslogReporterEnabled` | -| Description | Turns on the feature to write logs to Syslog. Please note that this can incur significant disk space usage when running felix on non-cluster hosts. | -| Schema | Boolean. | -| Default | `false` | +### DNSPolicyMode[​](#dnspolicymode) -##### `syslogReporterNetwork` +| Value | Description | +| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| DelayDeniedPacket (default) | Felix delays any denied packet that traversed a policy that included egress domain matches, but did not match. The packet is released after a fixed time, or after the destination IP address was programmed. | +| DelayDNSResponse | Felix delays any DNS response until related IPSets are programmed. This introduces some latency to all DNS packets (even when no IPSet programming is required), but it ensures policy hit statistics are accurate. 
This is the recommended setting when you are making use of staged policies or policy rule hit statistics. A Linux kernel version of 3.13 or greater is required to use `DelayDNSResponse`. For earlier kernel versions, this value is modified to `DelayDeniedPacket`. | +| Inline | Parses DNS response inline with DNS response packet processing within iptables. This guarantees the DNS rules reflect any change immediately. This mode works for iptables only and matches the same mode for `BPFDNSPolicyMode`. This setting is ignored on Windows and `NoDelay` is always used. | +| NoDelay | Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `syslogReporterNetwork` | -| Description | The network to dial to when writing to Syslog. Known networks are "tcp", "tcp4" (IPv4-only), "tcp6" (IPv6-only), "udp", "udp4" (IPv4-only), "udp6" (IPv6-only), "ip", "ip4" (IPv4-only), "ip6" (IPv6-only), "unix", "unixgram" and "unixpacket". For more, see: https\://pkg.go.dev/net#Dial. | -| Schema | String. | -| Default | none | +On Windows, or when using the eBPF data plane, this setting is ignored. Windows always uses `NoDelay` while eBPF has its own [BPFDNSPolicyMode](#bpfdnspolicymode) option. 
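The DNS policy modes above are selected via the `dnsPolicyMode` field of the default FelixConfiguration resource. A minimal sketch (the resource layout is assumed from the standard FelixConfiguration format; the value is one of the modes in the table above):

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  # DelayDNSResponse holds DNS responses until the related IP sets are
  # programmed; requires Linux kernel 3.13 or later, otherwise Felix
  # falls back to DelayDeniedPacket.
  dnsPolicyMode: DelayDNSResponse
```

This adds a small amount of latency to DNS responses in exchange for accurate policy hit statistics, per the table above.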
-#### Flow logs: file reports[​](#flow-logs-file-reports) +### BPFDNSPolicyMode[​](#bpfdnspolicymode) -##### `flowLogsAggregationThresholdBytes` +| Value | Description | +| ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Inline | Felix does not introduce any delay to any packets. Felix's eBPF programs parse DNS responses and program policy rules immediately, before the DNS response is passed to the application. This only applies to wildcard prefixes: `*.x.y.z` will be processed in this manner, but wildcards such as `x.*.y.z` will not match. A Linux kernel version of 5.17 or greater is required. | +| NoDelay | Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsAggregationThresholdBytes` | -| Description | Used specify how far behind the external pipeline that reads flow logs can be. Default is 8192 bytes. This parameter only takes effect when FlowLogsDynamicAggregationEnabled is set to true. | -| Schema | Integer | -| Default | `8192` | +### RouteTableRange[​](#routetablerange) -##### `flowLogsCollectProcessInfo` +The `RouteTableRange` option is now deprecated in favor of [RouteTableRanges](#routetableranges). 
-| Attribute | Value | -| ----------- | --------------------------------------------------------------------------- | -| Key | `flowLogsCollectProcessInfo` | -| Description | If enabled Felix will load the kprobe BPF programs to collect process info. | -| Schema | Boolean. | -| Default | `false` | +| Field | Description | Accepted Values | Schema | +| ----- | -------------------- | --------------- | ------ | +| min | Minimum index to use | 1-250 | int | +| max | Maximum index to use | 1-250 | int | -##### `flowLogsCollectProcessPath` +### RouteTableRanges[​](#routetableranges) -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsCollectProcessPath` | -| Description | When FlowLogsCollectProcessPath and FlowLogsCollectProcessInfo are both enabled, each flow log will include information about the process that is sending or receiving the packets in that flow: the `process_name` field will contain the full path of the process executable, and the `process_args` field will have the arguments with which the executable was invoked. Process information will not be reported for connections which use raw sockets. | -| Schema | Boolean. 
| -| Default | `false` |
+`RouteTableRanges` is a list of `RouteTableRange` objects:
-##### `flowLogsCollectTcpStats`
+| Field | Description | Accepted Values | Schema |
+| ----- | -------------------- | --------------- | ------ |
+| min | Minimum index to use | 1 - 4294967295 | int |
+| max | Maximum index to use | 1 - 4294967295 | int |
-| Attribute | Value |
-| ----------- | --------------------------------------------- |
-| Key | `flowLogsCollectTcpStats` |
-| Description | Enables flow logs reporting TCP socket stats. |
-| Schema | Boolean. |
-| Default | `false` |
+Each item in the `RouteTableRanges` list designates a range of routing tables available to Calico. By default, Calico uses a single range of `1-250`. If a range spans Linux's reserved table range (`253-255`), then those tables are automatically excluded from the list. It's possible that other table ranges may also be reserved by third-party systems unknown to Calico. In that case, multiple ranges can be defined to target tables below and above the sensitive ranges:
-##### `flowLogsCollectorDebugTrace`
+```sh
+# target tables 65-99 and 256-1000, skipping 100-255
-| Attribute | Value |
-| ----------- | ------------------------------------------------------------------------------------------------------------------- |
-| Key | `flowLogsCollectorDebugTrace` |
-| Description | When FlowLogsCollectorDebugTrace is set to true, enables the logs in the collector to be printed in their entirety. |
-| Schema | Boolean. |
-| Default | `false` |
+calicoctl patch felixconfig default --type=merge -p '{"spec":{"routeTableRanges": [{"min": 65, "max": 99}, {"min": 256, "max": 1000}] }}'
+```
-##### `flowLogsDestDomainsByClient`
+*Note*: for performance reasons, the maximum total number of routing tables that Felix will accept is 65535 (2^16 - 1).
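The same ranges used in the `calicoctl patch` above can also be set declaratively in the FelixConfiguration manifest. A minimal sketch, assuming the default resource named `default`:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  # Two ranges, skipping tables 100-255 reserved by a third-party system
  routeTableRanges:
    - min: 65
      max: 99
    - min: 256
      max: 1000
```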
-| Attribute | Value |
-| ----------- | ------------------------------------------------------------------------------------------- |
-| Key | `flowLogsDestDomainsByClient` |
-| Description | Used to configure if the source IP is used in the mapping of top level destination domains. |
-| Schema | Boolean. |
-| Default | `true` |
+Specifying both the `RouteTableRange` and `RouteTableRanges` arguments is not supported and will result in an error from the API.
-##### `flowLogsDynamicAggregationEnabled`
+### AWS IAM Role/Policy for source-destination-check configuration[​](#aws-iam-rolepolicy-for-source-destination-check-configuration)
-| Attribute | Value |
-| ----------- | -------------------------------------------------------------------------------- |
-| Key | `flowLogsDynamicAggregationEnabled` |
-| Description | Used to enable/disable dynamically changing aggregation levels. Default is true. |
-| Schema | Boolean. |
-| Default | `false` |
+Setting `awsSrcDstCheck` to `Disable` will automatically disable source-destination-check on EC2 instances in a cluster, provided the necessary IAM roles and policies are set. One of the policies assigned to the IAM role of the cluster nodes must contain a statement similar to the following:
-##### `flowLogsEnableHostEndpoint`
+```text
+{
-| Attribute | Value |
-| ----------- | ---------------------------------------------- |
-| Key | `flowLogsEnableHostEndpoint` |
-| Description | Enables Flow logs reporting for HostEndpoints. |
-| Schema | Boolean. |
-| Default | `false` |
+  "Effect": "Allow",
-##### `flowLogsEnableNetworkSets`
+  "Action": [
-| Attribute | Value |
-| ----------- | -------------------------------------------------- |
-| Key | `flowLogsEnableNetworkSets` |
-| Description | Enables Flow logs reporting for GlobalNetworkSets. |
-| Schema | Boolean.
| -| Default | `false` |
+    "ec2:DescribeInstances",
-##### `flowLogsFileAggregationKindForAllowed`
+    "ec2:ModifyNetworkInterfaceAttribute"
-| Attribute | Value |
-| ----------- | ----- |
-| Key | `flowLogsFileAggregationKindForAllowed` |
-| Description | Used to choose the type of aggregation for flow log entries created for allowed connections. Accepted values are 0, 1 and 2. 0 - No aggregation. 1 - Source port based aggregation. 2 - Pod prefix name based aggregation. |
-| Schema | One of: `0`, `1`, `2`. |
-| Default | `2` |
+  ],
-##### `flowLogsFileAggregationKindForDenied`
+  "Resource": "*"
-| Attribute | Value |
-| ----------- | ----- |
-| Key | `flowLogsFileAggregationKindForDenied` |
-| Description | Used to choose the type of aggregation for flow log entries created for denied connections. Accepted values are 0, 1, 2 and 3. 0 - No aggregation. 1 - Source port based aggregation. 2 - Pod prefix name based aggregation. 3 - No destination ports based aggregation. |
-| Schema | One of: `0`, `1`, `2`, `3`. |
-| Default | `1` |
+}
+```
-##### `flowLogsFileDirectory`
+If there are no policies containing the above statement attached to the node roles, attach a new policy. For example, if a node role is `test-cluster-nodeinstance-role`, click on the IAM role in the AWS console and, in the `Permission policies` list, add a new inline policy, pasting the above statement into the policy's JSON definition.
For detailed information, see [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html?icmpid=docs_iam_console).
-| Attribute | Value |
-| ----------- | ---------------------------------------------------- |
-| Key | `flowLogsFileDirectory` |
-| Description | Sets the directory where flow log files are stored. |
-| Schema | String. |
-| Default | `/var/log/calico/flowlogs` |
+For an EKS cluster, the necessary IAM role and policy are available by default. No further action is needed.
-##### `flowLogsFileDomainsLimit`
+## Supported operations[​](#supported-operations)
-| Attribute | Value |
-| ----------- | ----- |
-| Key | `flowLogsFileDomainsLimit` |
-| Description | Used to configure the number of (destination) domains to include in the flow log. These are not included for workload or host endpoint destinations. |
-| Schema | Integer |
-| Default | `5` |
+| Datastore type | Create | Delete | Delete (Global `default`) | Update | Get/List | Notes |
+| --------------------- | ------ | ------ | ------------------------- | ------ | -------- | ----- |
+| Kubernetes API server | Yes | Yes | No | Yes | Yes | |
-##### `flowLogsFileEnabled`
+### Global Alert
-| Attribute | Value |
-| ----------- | ----- |
-| Key | `flowLogsFileEnabled` |
-| Description | When set to true, enables logging flow logs to a file. If false, no flow logging to file will occur. |
-| Schema | Boolean. |
-| Default | `false` |
+A global alert resource represents a query that is periodically run against data sets collected by Calico Enterprise, with its findings added to the Alerts page in the Calico Enterprise web console. Alerts can fire on the mere existence of rows matching a query, or when aggregated metrics satisfy a condition.
-##### `flowLogsFileEnabledForAllowed` +Calico Enterprise supports alerts on the following data sets: -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsFileEnabledForAllowed` | -| Description | Used to enable/disable flow logs entries created for allowed connections. Default is true. This parameter only takes effect when FlowLogsFileReporterEnabled is set to true. | -| Schema | Boolean. | -| Default | `true` | +- [Audit logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/audit-overview) +- [DNS logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/dns/) +- [Flow logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/flow/) +- [L7 logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/l7/) -##### `flowLogsFileEnabledForDenied` +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases can be used to specify the resource type on the CLI: `globalalert.projectcalico.org`, `globalalerts.projectcalico.org` and abbreviations such as `globalalert.p` and `globalalerts.p`. -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsFileEnabledForDenied` | -| Description | Used to enable/disable flow logs entries created for denied flows. Default is true. This parameter only takes effect when FlowLogsFileReporterEnabled is set to true. | -| Schema | Boolean. 
| -| Default | `true` | +## Sample YAML[​](#sample-yaml) -##### `flowLogsFileIncludeLabels` +```yaml +apiVersion: projectcalico.org/v3 -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------- | -| Key | `flowLogsFileIncludeLabels` | -| Description | Used to configure if endpoint labels are included in a Flow log entry written to file. | -| Schema | Boolean. | -| Default | `false` | +kind: GlobalAlert -##### `flowLogsFileIncludePolicies` +metadata: -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------- | -| Key | `flowLogsFileIncludePolicies` | -| Description | Used to configure if policy information are included in a Flow log entry written to file. | -| Schema | Boolean. | -| Default | `false` | + name: sample -##### `flowLogsFileIncludeService` +spec: -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsFileIncludeService` | -| Description | Used to configure if the destination service is included in a Flow log entry written to file. The service information can only be included if the flow was explicitly determined to be directed at the service (e.g. when the pre-DNAT destination corresponds to the service ClusterIP and port). | -| Schema | Boolean. | -| Default | `false` | + summary: 'Sample' -##### `flowLogsFileMaxFileSizeMB` + description: 'Sample ${source_namespace}/${source_name_aggr}' -| Attribute | Value | -| ----------- | ----------------------------------------------------------- | -| Key | `flowLogsFileMaxFileSizeMB` | -| Description | Sets the max size in MB of flow logs files before rotation. 
| -| Schema | Integer | -| Default | `100` | + severity: 100 -##### `flowLogsFileMaxFiles` + dataSet: flows -| Attribute | Value | -| ----------- | ------------------------------------- | -| Key | `flowLogsFileMaxFiles` | -| Description | Sets the number of log files to keep. | -| Schema | Integer | -| Default | `5` | + query: action=allow -##### `flowLogsFileNatOutgoingPortLimit` + aggregateBy: [source_namespace, source_name_aggr] -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsFileNatOutgoingPortLimit` | -| Description | Used to specify the maximum number of distinct post SNAT ports that will appear in the flowLogs. Default value is 3. | -| Schema | Integer | -| Default | `3` | + field: num_flows -##### `flowLogsFilePerFlowProcessArgsLimit` + metric: sum -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsFilePerFlowProcessArgsLimit` | -| Description | Used to specify the maximum number of distinct process args that will appear in the flowLogs. Default value is 5. | -| Schema | Integer | -| Default | `5` | + condition: gt -##### `flowLogsFilePerFlowProcessLimit` + threshold: 0 +``` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsFilePerFlowProcessLimit` | -| Description | Used to specify the maximum number of flow log entries with distinct process information beyond which process information will be aggregated. 
| -| Schema | Integer | -| Default | `2` | +## GlobalAlert definition[​](#globalalert-definition) -##### `flowLogsFlushInterval` +### Metadata[​](#metadata) -| Attribute | Value | -| ----------- | --------------------------------------------------------- | -| Key | `flowLogsFlushInterval` | -| Description | Configures the interval at which Felix exports flow logs. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `5m0s` | +| Field | Description | Accepted Values | Schema | +| ----- | ----------------------- | ----------------------------------------- | ------ | +| name | The name of this alert. | Lower-case alphanumeric with optional `-` | string | -##### `flowLogsGoldmaneServer` +### Spec[​](#spec) -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------- | -| Key | `flowLogsGoldmaneServer` | -| Description | FlowLogGoldmaneServer is the flow server endpoint to which flow data should be published. | -| Schema | String. | -| Default | none | +| Field | Description | Type | Required | Acceptable Values | Default | +| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | ----------------------------------------- | ----------------------------- | -------------------------------- | +| type | Type will dictate how the fields of the GlobalAlert will be utilized. Each `type` will have different usages and/or defaults for the other GlobalAlert fields as described in the table. | string | no | RuleBased | RuleBased | +| description | Human-readable description of the template. | string | yes | | | +| summary | Template for the description field in generated events. See the summary section below for more details. `description` is used if this is omitted. 
| string | no | | | +| severity | Severity of the alert for display in Manager. | int | yes | 1 - 100 | | +| dataSet | Which data set to execute the alert against. | string | if `type` is `RuleBased` | audit, dns, flows, l7 | | +| period | How often the query defined will run, if `type` is `RuleBased`. | duration | no | 1h 2m 3s | 5m, 15m if `type` is `RuleBased` | +| lookback | Specifies how far back in time data is to be collected. Must exceed audit log flush interval, `dnsLogsFlushInterval`, or `flowLogsFlushInterval` as appropriate. | duration | no | 1h 2m 3s | 10m | +| query | Which data to include from the source data set. Written in a domain-specific query language. See the query section below. | string | no | | | +| aggregateBy | An optional list of fields to aggregate results. | string array | no | | | +| field | Which field to aggregate results by if using a metric other than count. | string | if metric is one of avg, max, min, or sum | | | +| metric | A metric to apply to aggregated results. `count` is the number of log entries matching the aggregation pattern. Others are applied only to numeric fields in the logs. | string | no | avg, max, min, sum, count | | +| condition | Compare the value of the metric to the threshold using this condition. | string | if metric defined | eq, not\_eq, lt, lte, gt, gte | | +| threshold | A numeric value to compare the value of the metric against. | float | if metric defined | | | +| substitutions | An optional list of values to replace variable names in query. | List of [GlobalAlertSubstitution](#globalalertsubstitution) | no | | | -##### `flowLogsLocalReporter` +### GlobalAlertSubstitution[​](#globalalertsubstitution) -| Attribute | Value | -| ----------- | -------------------------------------------------------------------- | -| Key | `flowLogsLocalReporter` | -| Description | Configures local Unix socket for reporting flow data from each node. | -| Schema | One of: `"Disabled"`, `"Enabled"`. 
| -| Default | `Disabled` | +| Field | Description | Type | Required | +| ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ | -------- | +| name | The name of the global alert substitution. It will be referenced by the variable names in query. Duplicate names are not allowed in the substitutions list. | string | yes | +| values | A list of values for this substitution. Wildcard operators asterisk (`*`) and question mark (`?`) are supported. | string array | yes | -##### `flowLogsMaxOriginalIPsIncluded` +### Status[​](#status) -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------- | -| Key | `flowLogsMaxOriginalIPsIncluded` | -| Description | Specifies the number of unique IP addresses (if relevant) that should be included in Flow logs. | -| Schema | Integer | -| Default | `50` | +| Field | Description | +| --------------- | ------------------------------------------------------------------------------------------- | +| lastUpdate | When the alert was last modified on the backend. | +| active | Whether the alert is active on the backend. | +| healthy | Whether the alert is in an error state or not. | +| lastExecuted | When the query for the alert last ran. | +| lastEvent | When the condition of the alert was last satisfied and an alert was successfully generated. | +| errorConditions | List of errors preventing operation of the updates or search. 
| -##### `flowLogsPolicyEvaluationMode` +## Query[​](#query) -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsPolicyEvaluationMode` | -| Description | Defines how policies are evaluated and reflected in flow logs. OnNewConnection - In this mode, staged policies are only evaluated when new connections are made in the dataplane. Staged/active policy changes will not be reflected in the `pending_policies` field of flow logs for long lived connections. Continuous - Felix evaluates active flows on a regular basis to determine the rule traces in the flow logs. Any policy updates that impact a flow will be reflected in the pending\_policies field, offering a near-real-time view of policy changes across flows. | -| Schema | String. | -| Default | `Continuous` | +Alerts use a domain-specific query language to select which records from the data set should be used in the alert. This could be used to identify flows with specific features, or to select (or omit) certain namespaces from consideration. -##### `flowLogsPolicyScope` +The query language is composed of any number of selectors, combined with boolean expressions (`AND`, `OR`, and `NOT`), set expressions (`IN` and `NOTIN`) and bracketed subexpressions. These are translated by Calico Enterprise to Elastic DSL queries that are executed on the backend. 
-| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsPolicyScope` | -| Description | Controls which policies are included in flow logs. AllPolicies - Processes both transit policies for the local node and endpoint policies derived from packet source/destination IPs. Provides comprehensive visibility into all policy evaluations but increases log volume. EndpointPolicies - Processes only policies for endpoints identified as the source or destination of the packet (whether workload or host endpoints). | -| Schema | String. | -| Default | `EndpointPolicies` | +Set expressions support wildcard operators asterisk (`*`) and question mark (`?`). The asterisk sign matches zero or more characters and the question mark matches a single character. Set values can be embedded into the query string or reference the values in the global alert substitution list. -##### `flowLogsPositionFilePath` +A selector consists of a key, comparator, and value. Keys and values may be identifiers consisting of alphanumerics and underscores (`_`) with the first character being alphabetic or an underscore, or may be quoted strings. Values may also be integer or floating point numbers. Comparators may be `=` (equal), `!=` (not equal), `<` (less than), `<=` (less than or equal), `>` (greater than), or `>=` (greater than or equal). 
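Tying these pieces together, a query that references the substitution list can be wired up as in the following hedged sketch (the alert name and the domain values are illustrative, not taken from a real deployment; the `${domains}` variable resolves to the substitution of the same name):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalAlert
metadata:
  name: suspicious-domains
spec:
  description: 'DNS lookups of suspicious domains'
  summary: 'DNS lookup of ${qname} matched the suspicious domains list'
  severity: 100
  dataSet: dns
  # ${domains} is replaced by the values of the substitution named "domains"
  query: qname IN ${domains}
  aggregateBy: [qname]
  substitutions:
    - name: domains
      values:
        - '*.badsite.example.com'
        - 'malware-c?.example.net'
```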
-| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `flowLogsPositionFilePath` | -| Description | Used specify the position of the external pipeline that reads flow logs. Default is /var/log/calico/flows.log.pos. This parameter only takes effect when FlowLogsDynamicAggregationEnabled is set to true. | -| Schema | String. | -| Default | `/var/log/calico/flows.log.pos` | +Keys must be indexed fields in their corresponding data set. See the appendix for a list of valid keys in each data set. -#### DNS logs / policy[​](#dns-logs--policy) +Examples: -##### `dnsCacheEpoch` +- `query: "count > 0"` +- `query: "\"servers.ip\" = \"127.0.0.1\""` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------- | -| Key | `dnsCacheEpoch` | -| Description | An arbitrary number that can be changed, at runtime, to tell Felix to discard all its learnt DNS information. . | -| Schema | Integer | -| Default | `0` | +Selectors may be combined using `AND`, `OR`, and `NOT` boolean expressions, `IN` and `NOTIN` set expressions, and bracketed subexpressions. -##### `dnsCacheFile` +Examples: -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------ | -| Key | `dnsCacheFile` | -| Description | The name of the file that Felix uses to preserve learnt DNS information when restarting. . | -| Schema | String. 
| -| Default | `/var/run/calico/felix-dns-cache.txt` |
+- `query: "count > 100 AND client_name=mypod"`
+- `query: "client_namespace = ns1 OR client_namespace = ns2"`
+- `query: "count > 100 AND NOT (client_namespace = ns1 OR client_namespace = ns2)"`
+- `query: "(qtype = A OR qtype = AAAA) AND rcode != NoError"`
+- `query: "process_name IN {\"proc1?\", \"*proc2\"} AND source_namespace = ns1"`
+- `query: "qname NOTIN ${domains}"`
-##### `dnsCacheSaveInterval`
+## Aggregation[​](#aggregation)
-| Attribute | Value |
-| ----------- | ----- |
-| Key | `dnsCacheSaveInterval` |
-| Description | The periodic interval at which Felix saves learnt DNS information to the cache file. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `1m0s` |
+Results from the query can be aggregated by any number of data fields. Only these data fields will be included in the generated alerts, and each unique combination of aggregations will generate a unique alert. Careful consideration of fields for aggregation will yield the best results.
-##### `dnsExtraTTL`
+Some good choices for aggregations on the `flows` data set are `[source_namespace, source_name_aggr, source_name]`, `[source_ip]`, `[dest_namespace, dest_name_aggr, dest_name]`, and `[dest_ip]`, depending on your use case. For the `dns` data set, `[client_namespace, client_name_aggr, client_name]` is a good choice for an aggregation pattern.
-| Attribute | Value |
-| ----------- | ----- |
-| Key | `dnsExtraTTL` |
-| Description | Extra time to keep IPs and alias names that are learnt from DNS, in addition to each name or IP's advertised TTL. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`.
| -| Default | `0s` | +## Metrics and conditions[​](#metrics-and-conditions) -##### `dnsLogsFileAggregationKind` +Results from the query can be further aggregated using a metric that is applied to a numeric field, or counts the number of rows in an aggregation. Search hits satisfying the condition are output as alerts. -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `dnsLogsFileAggregationKind` | -| Description | Used to choose the type of aggregation for DNS log entries. . Accepted values are 0 and 1. 0 - No aggregation. 1 - Aggregate over clients with the same name prefix. | -| Schema | One of: `0`, `1`. | -| Default | `1` | +| Metric | Description | Applied to Field | +| ------ | ---------------------------------- | ---------------- | +| count | Counts the number of rows | No | +| min | The minimal value of the field | Yes | +| max | The maximal value of the field | Yes | +| sum | The sum of all values of the field | Yes | +| avg | The average value of the field | Yes | -##### `dnsLogsFileDirectory` +| Condition | Description | +| --------- | --------------------- | +| eq | Equals | +| not\_eq | Not equals | +| lt | Less than | +| lte | Less than or equal | +| gt | Greater than | +| gte | Greater than or equal | -| Attribute | Value | -| ----------- | -------------------------------------------------- | -| Key | `dnsLogsFileDirectory` | -| Description | Sets the directory where DNS log files are stored. | -| Schema | String. | -| Default | `/var/log/calico/dnslogs` | +Example: -##### `dnsLogsFileEnabled` +```yaml +apiVersion: projectcalico.org/v3 -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------- | -| Key | `dnsLogsFileEnabled` | -| Description | Controls logging DNS logs to a file. If false no DNS logging to file will occur. 
|
-| Schema | Boolean. |
-| Default | `false` |
+kind: GlobalAlert

-##### `dnsLogsFileIncludeLabels`
+metadata:

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsLogsFileIncludeLabels` |
-| Description | Used to configure if endpoint labels are included in a DNS log entry written to file. |
-| Schema | Boolean. |
-| Default | `true` |
+  name: frequent-dns-responses

-##### `dnsLogsFileMaxFileSizeMB`
+spec:

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsLogsFileMaxFileSizeMB` |
-| Description | Sets the max size in MB of DNS log files before rotation. |
-| Schema | Integer |
-| Default | `100` |
+  description: 'Monitor for NXDomain'

-##### `dnsLogsFileMaxFiles`
+  summary: 'Observed ${sum} NXDomain responses for ${qname}'

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsLogsFileMaxFiles` |
-| Description | Sets the number of DNS log files to keep. |
-| Schema | Integer |
-| Default | `5` |
+  severity: 100

-##### `dnsLogsFilePerNodeLimit`
+  dataSet: dns

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsLogsFilePerNodeLimit` |
-| Description | Limit on the number of DNS logs that can be emitted within each flush interval. When this limit has been reached, Felix counts the number of unloggable DNS responses within the flush interval, and emits a WARNING log with that count at the same time as it flushes the buffered DNS logs. |
-| Schema | Integer |
-| Default | `0` |
+  query: rcode = NXDomain AND (rtype = A or rtype = AAAA)

-##### `dnsLogsFlushInterval`
+  aggregateBy: qname

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsLogsFlushInterval` |
-| Description | Configures the interval at which Felix exports DNS logs. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `5m0s` |
+  field: count

-##### `dnsLogsLatency`
+  metric: sum

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsLogsLatency` |
-| Description | Indicates to include measurements of DNS request/response latency in each DNS log. |
-| Schema | Boolean. |
-| Default | `true` |
+  condition: gte

-##### `dnsPacketsNfqueueID`
+  threshold: 100
+```

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsPacketsNfqueueID` |
-| Description | The NFQUEUE ID to use for capturing DNS packets to ensure programming IPSets occurs before the response is released. Used when DNSPolicyMode is DelayDNSResponse. |
-| Schema | Integer |
-| Default | `101` |
+This alert identifies non-existing DNS responses for Internet addresses that were observed more than 100 times in the past 10 minutes.

-##### `dnsPacketsNfqueueMaxHoldDuration`
+### Unconditional alerts[​](#unconditional-alerts)

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsPacketsNfqueueMaxHoldDuration` |
-| Description | The max length of time to hold on to a DNS response while waiting for the dataplane to be programmed. Used when DNSPolicyMode is DelayDNSResponse. |
-| Schema | Duration string, for example `1m30s123ms` or `1h5m`. |
-| Default | `3s` |
+If the `field`, `metric`, `condition`, and `threshold` fields of an alert are left blank then the alert will trigger whenever its query returns any data. Each hit (or aggregation pattern, if `aggregateBy` is non-empty) returned will cause an event to be created. This should be used **only** when the query is highly specific to avoid filling the Alerts page and index with a large number of events. The use of `aggregateBy` is strongly recommended to reduce the number of entries added to the Alerts page.

-##### `dnsPacketsNfqueueSize`
+The following example would alert on incoming connections to postgres pods from the Internet that were not denied by policy. It runs hourly to reduce the noise. Noise could be further reduced by removing `source_ip` from the `aggregateBy` clause at the cost of removing `source_ip` from the generated events.

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsPacketsNfqueueSize` |
-| Description | The size of the NFQUEUE for captured DNS packets. This is the maximum number of DNS packets that may be queued awaiting programming in the dataplane. Used when DNSPolicyMode is DelayDNSResponse. |
-| Schema | Integer |
-| Default | `100` |
+```yaml
+period: 1h

-##### `dnsPolicyMode`
+lookback: 75m

-| Attribute | Value |
-| ----------- | ----------- |
-| Key | `dnsPolicyMode` |
-| Description | Specifies how DNS policy programming will be handled. DelayDeniedPacket - Felix delays any denied packet that traversed a policy that included egress domain matches, but did not match. The packet is released after a fixed time, or after the destination IP address was programmed. DelayDNSResponse - Felix delays any DNS response until related IPSets are programmed. This introduces some latency to all DNS packets (even when no IPSet programming is required), but it ensures policy hit statistics are accurate. This is the recommended setting when you are making use of staged policies or policy rule hit statistics. NoDelay - Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. Inline - Parses DNS response inline with DNS response packet processing within IPTables. This guarantees the DNS rules reflect any change immediately. This mode works for iptables only and matches the same mode for BPFDNSPolicyMode. This setting is ignored on Windows and "NoDelay" is always used. This setting is ignored by eBPF and BPFDNSPolicyMode is used instead. This field has no effect in NFTables mode. Please use NFTablesDNSPolicyMode instead. |
-| Schema | One of: `"DelayDNSResponse"`, `"DelayDeniedPacket"`, `"Inline"`, `"NoDelay"`. |
-| Default | `DelayDeniedPacket` |
+query: 'dest_labels="application=postgres" AND source_type=net AND action=allow AND proto=tcp AND dest_port=5432'
+
+aggregateBy: [dest_namespace, dest_name, source_ip]
+```
+
+## Summary template[​](#summary-template)
+
+Alerts may include a summary template to provide context for the alerts in the Calico Enterprise web console Alert user interface. Any field in the `aggregateBy` section, or the value of the `metric`, may be substituted in the summary using a bracketed variable syntax.
+
+Example:
+
+```yaml
+summary: 'Observed ${sum} NXDomain responses for ${qname}'
+```
+
+The `description` field is validated in the same manner. If the `summary` field is not provided, the `description` field is used in its place.
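+
+As a sketch of how substitution lines up with the rest of the spec (this alert is illustrative, not one of the built-in samples), every `${...}` variable in the summary must name a field from `aggregateBy` or the value of `metric`:
+
+```yaml
+summary: 'Denied ${sum} flows from ${source_namespace}'
+dataSet: flows
+query: action=deny
+aggregateBy: [source_namespace]
+field: count
+metric: sum
+condition: gte
+threshold: 50
+```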
+ +## Period and lookback[​](#period-and-lookback) + +The interval between alerts, and the amount of data considered by the alert may be controlled using the `period` and `lookback` parameters respectively. These fields are formatted as [duration](https://golang.org/pkg/time/#ParseDuration) strings. + +> A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". + +The minimum duration of a period is 1 minute with a default of 5 minutes and the default for lookback is 10 minutes. The lookback should always be greater than the sum of the period and the configured `FlowLogsFlushInterval` or `DNSLogsFlushInterval` as appropriate to avoid gaps in coverage. + +## Alert records[​](#alert-records) + +With only aggregations and no metrics, the alert will generate one event per aggregation pattern returned by the query. The record field will contain only the aggregated fields. As before, this should be used with specific queries. + +The addition of a metric will include the value of that metric in the record, along with any aggregations. This, combined with queries as necessary, will yield the best results in most cases. + +With no aggregations the alert will generate one event per record returned by the query. The record will be included in its entirety in the record field of the event. This should only be used with very narrow and specific queries. + +## Templates[​](#templates) + +Calico Enterprise supports the `GlobalAlertTemplate` resource type. These are used in the Calico Enterprise web console to create alerts with prepopulated fields that can be modified to suit your needs. The `GlobalAlertTemplate` resource is configured identically to the `GlobalAlert` resource. Calico Enterprise includes some sample Alert templates; add your own templates as needed. 
+
+### Sample YAML[​](#sample-yaml-1)
+
+**RuleBased GlobalAlert**
+
+```yaml
+apiVersion: projectcalico.org/v3
+
+kind: GlobalAlertTemplate
+
+metadata:
+
+  name: http.connections
+
+spec:
+
+  description: 'HTTP connections to a target namespace'
+
+  summary: 'HTTP connections from ${source_namespace}/${source_name_aggr} to /${dest_name_aggr}'
+
+  severity: 50
+
+  dataSet: flows
+
+  query: dest_namespace="" AND dest_port=80
+
+  aggregateBy: [source_namespace, dest_name_aggr, source_name_aggr]
+
+  field: count
+
+  metric: sum
+
+  condition: gte
+
+  threshold: 1
+```
+
+## Appendix: Valid fields for queries[​](#appendix-valid-fields-for-queries)
+
+### Audit logs[​](#audit-logs)
+
+See [audit.k8s.io group v1](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go) for descriptions of fields.
+
+### DNS logs[​](#dns-logs)
+
+See [DNS logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/dns/dns-logs) for description of fields.
+
+### Flow logs[​](#flow-logs)
+
+See [Flow logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/flow/datatypes) for description of fields.
+
+### L7 logs[​](#l7-logs)
+
+See [L7 logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/l7/datatypes) for description of fields.
+
+### Global network policy
+
+A global network policy resource (`GlobalNetworkPolicy`) represents an ordered set of rules which are applied to a collection of endpoints that match a [label selector](#selector).
+
+`GlobalNetworkPolicy` is not a namespaced resource. `GlobalNetworkPolicy` applies to [workload endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) in all namespaces, and to [host endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint).
Select a namespace in a `GlobalNetworkPolicy` in the standard selector by using `projectcalico.org/namespace` as the label name and a `namespace` name as the value to compare against, e.g., `projectcalico.org/namespace == "default"`. See [network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) for namespaced network policy. + +`GlobalNetworkPolicy` resources can be used to define network connectivity rules between groups of Calico Enterprise endpoints and host endpoints, and take precedence over [Profile resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) if any are defined. + +GlobalNetworkPolicies are organized into [tiers](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the next [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), to enable hierarchical security policy. + +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `globalnetworkpolicy.projectcalico.org`, `globalnetworkpolicies.projectcalico.org` and abbreviations such as `globalnetworkpolicy.p` and `globalnetworkpolicies.p`. + +## Sample YAML[​](#sample-yaml) + +This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on `database` endpoints. 
+ +```yaml +apiVersion: projectcalico.org/v3 + +kind: GlobalNetworkPolicy + +metadata: + + name: internal-access.allow-tcp-6379 + +spec: + + tier: internal-access + + selector: role == 'database' + + types: + + - Ingress + + - Egress + + ingress: + + - action: Allow + + metadata: + + annotations: + + from: frontend + + to: database + + protocol: TCP + + source: + + selector: role == 'frontend' + + destination: + + ports: + + - 6379 + + egress: + + - action: Allow +``` + +## Definition[​](#definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | Default | +| ----- | ----------------------------------------- | --------------------------------------------------- | ------ | ------- | +| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | | + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- | +| order | Controls the order of precedence. Calico Enterprise applies the policy with the lowest value first. | | float | | +| tier | Name of the [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier) this policy belongs to. | | string | `default` | +| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() | +| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select all service accounts in the cluster with a specific name using the `projectcalico.org/name` label. | | [selector](#selector) | all() | +| namespaceSelector | Selects the namespace(s) to which this policy applies. 
Select a specific namespace by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() | +| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* | +| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | | +| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | | +| doNotTrack\*\* | Indicates to apply the rules in this policy before any data plane connection tracking, and that packets allowed by these rules should not be tracked. | true, false | boolean | false | +| preDNAT\*\* | Indicates to apply the rules in this policy before any DNAT. | true, false | boolean | false | +| applyOnForward\*\* | Indicates to apply the rules in this policy on forwarded traffic as well as to locally terminated traffic. | true, false | boolean | false | +| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | | + +\* If `types` has no value, Calico Enterprise defaults as follows. + +> | Ingress Rules Present | Egress Rules Present | `Types` value | +> | --------------------- | -------------------- | ----------------- | +> | No | No | `Ingress` | +> | Yes | No | `Ingress` | +> | No | Yes | `Egress` | +> | Yes | Yes | `Ingress, Egress` | + +\*\* The `doNotTrack` and `preDNAT` and `applyOnForward` fields are meaningful only when applying policy to a [host endpoint](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint). 
+ +Only one of `doNotTrack` and `preDNAT` may be set to `true` (in a given policy). If they are both `false`, or when applying the policy to a [workload endpoint](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint), the policy is enforced after connection tracking and any DNAT. + +`applyOnForward` must be set to `true` if either `doNotTrack` or `preDNAT` is `true` because for a given policy, any untracked rules or rules before DNAT will in practice apply to forwarded traffic. + +See [Policy for hosts](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/) for how `doNotTrack` and `preDNAT` and `applyOnForward` can be useful for host endpoints. + +### Rule[​](#rule) + +A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order. + +| Field | Description | Accepted Values | Schema | Default | +| ----------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | ----------------------------- | ------- | +| metadata | Per-rule metadata. | | [RuleMetadata](#rulemetadata) | | +| action | Action to perform when matching this rule. | `Allow`, `Deny`, `Log`, `Pass` | string | | +| protocol | Positive protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | +| notProtocol | Negative protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | +| icmp | ICMP match criteria. | | [ICMP](#icmp) | | +| notICMP | Negative match on ICMP. | | [ICMP](#icmp) | | +| ipVersion | Positive IP version match. | `4`, `6` | integer | | +| source | Source match parameters. | | [EntityRule](#entityrule) | | +| destination | Destination match parameters. | | [EntityRule](#entityrule) | | +| http | Match HTTP request parameters. Application layer policy must be enabled to use this field. 
| | [HTTPMatch](#httpmatch) | |
+
+After a `Log` action, processing continues with the next rule; `Allow` and `Deny` are immediate and final and no further rules are processed.
+
+An `action` of `Pass` in a `NetworkPolicy` or `GlobalNetworkPolicy` will skip over the remaining policies and jump to the first [profile](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) assigned to the endpoint, applying the policy configured in the profile; if there are no Profiles configured for the endpoint the default applied action is `Deny`.
+
+### RuleMetadata[​](#rulemetadata)
+
+Metadata associated with a specific rule (rather than the policy as a whole). The contents of the metadata do not affect how a rule is interpreted or enforced; it is simply a way to store additional information for use by operators or applications that interact with Calico Enterprise.
+
+| Field | Description | Schema | Default |
+| ----------- | ----------------------------------- | ----------------------- | ------- |
+| annotations | Arbitrary non-identifying metadata. | map of string to string | |
+
+Example:
+
+```yaml
+metadata:
+
+  annotations:
+
+    app: database
+
+    owner: devops
+```
+
+Annotations follow the [same rules as Kubernetes for valid syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set).
+
+On Linux with the iptables data plane, rule annotations are rendered as comments in the form `-m comment --comment "<name>=<value>"` on the iptables rule(s) that correspond to the Calico Enterprise rule.
+
+### ICMP[​](#icmp)
+
+| Field | Description | Accepted Values | Schema | Default |
+| ----- | ------------------- | -------------------- | ------- | ------- |
+| type | Match on ICMP type. | Can be integer 0-254 | integer | |
+| code | Match on ICMP code. 
| Can be integer 0-255 | integer | | + +### EntityRule[​](#entityrule) + +Entity rules specify the attributes of the source or destination of a packet that must match for the rule as a whole to match. Packets can be matched on combinations of: + +- Identity of the source/destination, by using [Selectors](#selectors) or by specifying a particular Kubernetes `Service`. Selectors can match [workload endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint), [host endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint) and ([namespaced](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkset) or [global](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset)) network sets. +- Source/destination IP address, protocol and port. + +If the rule contains multiple match criteria (for example, an IP and a port) then all match criteria must match for the rule as a whole to match a packet. + +| Field | Description | Accepted Values | Schema | Default | +| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------- | +| nets | Match packets with IP in any of the listed CIDRs. 
| List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | +| notNets | Negative match on CIDRs. Match packets with IP not in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | +| selector | Positive match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | +| notSelector | Negative match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | +| namespaceSelector | Positive match on selected namespaces. If specified, only workload endpoints in the selected Kubernetes namespaces are matched. Matches namespaces based on the labels that have been applied to the namespaces. Defines the scope that selectors will apply to, if not defined then selectors apply to the NetworkPolicy's namespace. Match a specific namespace by name using the `projectcalico.org/name` label. Select the non-namespaced resources like GlobalNetworkSet(s), host endpoints to which this policy applies by using `global()` selector. | Valid selector | [selector](#selector) | | +| ports | Positive match on the specified ports | | list of [ports](#ports) | | +| domains | Positive match on [domain names](#exact-and-wildcard-domain-names). | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list of strings | | +| notPorts | Negative match on the specified ports | | list of [ports](#ports) | | +| serviceAccounts | Match endpoints running under service accounts. If a `namespaceSelector` is also defined, the set of service accounts this applies to is limited to the service accounts in the selected namespaces. 
| | [ServiceAccountMatch](#serviceaccountmatch) | |
+| services | Match the specified service(s). If specified on egress rule destinations, no other selection criteria can be set. If specified on ingress rule sources, only positive or negative matches on ports can be specified. | | [ServiceMatch](#servicematch) | |
+
+> **SECONDARY:** You cannot mix IPv4 and IPv6 CIDRs in a single rule using `nets` or `notNets`. If you need to match both, create 2 rules.
+
+#### Selector performance in EntityRules[​](#selector-performance-in-entityrules)
+
+When rendering policy into the data plane, Calico Enterprise must identify the endpoints that match the selectors in all active rules. This calculation is optimized for certain common selector types. Using the optimized selector types reduces CPU usage (and policy rendering time) by orders of magnitude. This becomes important at high scale (hundreds of active rules, hundreds of thousands of endpoints).
+
+The optimized operators are as follows:
+
+- `label == "value"`
+- `label in { 'v1', 'v2' }`
+- `has(label)`
+- `<expr1> && <expr2>` is optimized if **either** `<expr1>` or `<expr2>` is optimized.
+
+The following perform like `has(label)`. All endpoints with the label will be scanned to find matches:
+
+- `label contains 's'`
+- `label starts with 's'`
+- `label ends with 's'`
+
+The other operators, and in particular, `all()`, `!`, `||` and `!=` are not optimized.
+
+Examples:
+
+- `a == 'b'` - optimized
+- `a == 'b' && has(c)` - optimized
+- `a == 'b' || has(c)` - **not** optimized due to use of `||`
+- `c != 'd'` - **not** optimized due to use of `!=`
+- `!has(a)` - **not** optimized due to use of `!`
+- `a == 'b' && c != 'd'` - optimized, `a == 'b'` is optimized so `a == 'b' && <anything>` is optimized.
+- `c != 'd' && a == 'b'` - optimized, `a == 'b'` is optimized so `<anything> && a == 'b'` is optimized.
+
+### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names)
+
+The `domains` field is only valid for egress Allow rules. 
It restricts the rule to apply only to traffic to one of the specified domains. If this field is specified, the parent [Rule](#rule)'s `action` must be `Allow`, and `nets` and `selector` must both be left empty. + +When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example: + +- `microsoft.com` +- `tigera.io` + +With a single asterisk in any part of the domain name, it matches 1 or more path components at that position. For example: + +- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com` +- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com` +- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on + +**Not** supported are: + +- Multiple wildcards in the same domain, for example: `*.*.mycompany.com` +- Asterisks that are not the entire component, for example: `www.g*.com` +- A wildcard as the last component, for example: `www.mycompany.*` +- More general wildcards, such as regular expressions + +> **SECONDARY:** Calico Enterprise implements policy for domain names by learning the corresponding IPs from DNS, then programming rules to allow those IPs. This means that if multiple domain names A, B and C all map to the same IP, and there is domain-based policy to allow A, traffic to B and C will be allowed as well. + +### Selector[​](#selector) + +A label selector is an expression which either matches or does not match a resource based on its labels. + +Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses. 
+
+| Expression | Meaning |
+| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| **Logical operators** | |
+| `(<expression>)` | Matches if and only if `<expression>` matches. (Parentheses are used for grouping expressions.) |
+| `!<expression>` | Matches if and only if `<expression>` does not match. **Tip:** `!` is a special character at the start of a YAML string, if you need to use `!` at the start of a YAML string, enclose the string in quotes. |
+| `<expression1> && <expression2>` | "And": matches if and only if both `<expression1>` and `<expression2>` match |
+| `<expression1> \|\| <expression2>` | "Or": matches if and only if either `<expression1>` or `<expression2>` matches. |
+| **Match operators** | |
+| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. |
+| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. |
+| `k == 'v'` | Matches resources with the label 'k' and value 'v'. |
+| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` |
+| `has(k)` | Matches resources with label 'k', independent of value. To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` |
+| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set |
+| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set |
+| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' |
+| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' |
+| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' |
+
+Operators have the following precedence:
+
+- **Highest**: all the match operators
+- Parentheses `( ... 
)` 
+- Negation with `!`
+- Conjunction with `&&`
+- **Lowest**: Disjunction with `||`
+
+For example, the expression
+
+```text
+! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'}
+```
+
+Would be "bracketed" like this:
+
+```text
+(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'}))
+```
+
+It would match:
+
+- Any resource that did not have label "my-label".
+
+- Any resource that both:
+
+  - Has a value for `my-label` that starts with "prod", and,
+  - Has a role label with value either "frontend", or "business".
+
+Understanding scopes and the `all()` and `global()` operators: selectors have a scope of resources that they are matched against, which depends on the context in which they are used. For example:
+
+- The `nodeSelector` in an `IPPool` selects over `Node` resources.
+
+- The top-level selector in a `NetworkPolicy` selects over the workloads *in the same namespace* as the `NetworkPolicy`.
+
+- The top-level selector in a `GlobalNetworkPolicy` doesn't have the same restriction, it selects over all endpoints including namespaced `WorkloadEndpoint`s and non-namespaced `HostEndpoint`s.
+
+- The `namespaceSelector` in a `NetworkPolicy` (or `GlobalNetworkPolicy`) *rule* selects over the labels on namespaces rather than workloads.
+
+- The `namespaceSelector` determines the scope of the accompanying `selector` in the entity rule. If no `namespaceSelector` is present then the rule's `selector` matches the default scope for that type of policy. (This is the same namespace for `NetworkPolicy` and all endpoints/network sets for `GlobalNetworkPolicy`)
+
+- The `global()` operator can be used (only) in a `namespaceSelector` to change the scope of the main `selector` to include non-namespaced resources such as [GlobalNetworkSet](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset). 
This allows namespaced `NetworkPolicy` resources to refer to global non-namespaced resources, which would otherwise be impossible. + +### Ports[​](#ports) + +Calico Enterprise supports the following syntaxes for expressing ports. + +| Syntax | Example | Description | +| --------- | ---------- | ------------------------------------------------------------------- | +| int | 80 | The exact (numeric) port specified | +| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end | +| string | named-port | A named port, as defined in the ports list of one or more endpoints | + +An individual numeric port may be specified as a YAML/JSON integer. A port range or named port must be represented as a string. For example, this would be a valid list of ports: + +```yaml +ports: [8080, '1234:5678', 'named-port'] +``` + +#### Named ports[​](#named-ports) + +Using a named port in an `EntityRule`, instead of a numeric port, gives a layer of indirection, allowing for the named port to map to different numeric values for each endpoint. + +For example, suppose you have multiple HTTP servers running as workloads; some exposing their HTTP port on port 80 and others on port 8080. In each workload, you could create a named port called `http-port` that maps to the correct local port. Then, in a rule, you could refer to the name `http-port` instead of writing a different rule for each type of server. + +> **SECONDARY:** Since each named port may refer to many endpoints (and Calico Enterprise has to expand a named port into a set of endpoint/port combinations), using a named port is considerably more expensive in terms of CPU than using a simple numeric port. We recommend that they are used sparingly, only where the extra indirection is required. + +### ServiceAccountMatch[​](#serviceaccountmatch) + +A ServiceAccountMatch matches service accounts in an EntityRule. 
+ +| Field | Description | Schema | +| -------- | ------------------------------- | --------------------- | +| names | Match service accounts by name | list of strings | +| selector | Match service accounts by label | [selector](#selector) | + +### ServiceMatch[​](#servicematch) + +A ServiceMatch matches a service in an EntityRule. + +| Field | Description | Schema | +| --------- | ------------------------ | ------ | +| name | The service's name. | string | +| namespace | The service's namespace. | string | + +### Performance Hints[​](#performance-hints) + +Performance hints provide a way to tell Calico Enterprise about the intended use of the policy so that it may process it more efficiently. Currently only one hint is defined: + +- `AssumeNeededOnEveryNode`: normally, Calico Enterprise only calculates a policy's rules and selectors on nodes where the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases. The `AssumeNeededOnEveryNode` hint tells Calico Enterprise to treat the policy as "in use" on *every* node. This is useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads" the policy on every node so that there is less work to do when the first endpoint matching the policy shows up. It also prevents work from being done to tear down the policy when the last endpoint is drained. + +## Application layer policy[​](#application-layer-policy) + +Application layer policy is an optional feature of Calico Enterprise and [must be enabled](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp) to use the following match criteria. + +> **SECONDARY:** Application layer policy match criteria are supported with the following restrictions. +> +> - Only ingress policy is supported. Egress policy must not contain any application layer policy match clauses. 
+> - Rules must have the action `Allow` if they contain application layer policy match clauses. + +### HTTPMatch[​](#httpmatch) + +An HTTPMatch matches attributes of an HTTP request. The presence of an HTTPMatch clause on a Rule will cause that rule to only match HTTP traffic. Other application layer protocols will not match the rule. + +Example: + +```yaml +http: + + methods: ['GET', 'PUT'] + + paths: + + - exact: '/projects/calico' + + - prefix: '/users' + + headers: + + - header: 'x-forwarded-for' + + operator: 'HasPrefix' + + values: ['192.168.0.1', '192.168.0.254'] +``` + +| Field | Description | Schema | +| ------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- | +| methods | Match HTTP methods. Case sensitive. [Standard HTTP method descriptions.](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html) | list of strings | +| paths | Match HTTP paths. Case sensitive. | list of [HTTPPathMatch](#httppathmatch) | +| headers | Match HTTP headers. | list of [HTTPHeaderMatch](#httpheadermatch) | + +### HTTPPathMatch[​](#httppathmatch) + +| Syntax | Example | Description | +| ------ | ------------------- | ------------------------------------------------------------------------------- | +| exact | `exact: "/foo/bar"` | Matches the exact path as written, not including the query string or fragments. | +| prefix | `prefix: "/keys"` | Matches any path that begins with the given prefix. | + +### HTTPHeaderMatch[​](#httpheadermatch) + +| Syntax | Example | Description | +| -------- | ---------------------------------- | ------------------------------------------------------------------------------------- | +| header | `x-forwarded-for` | Name of an HTTP header. Header names are case insensitive; use lowercase characters. | +| operator | `In` | Operator name to apply to the HTTP header value. Case sensitive. 
| +| values | `['192.168.0.1', '192.168.0.254']` | Values that the operator will test the HTTP header value against. Case sensitive. | + +The following operators are allowed: + +- `Exists`: matches the HTTP request header if the specified header exists. `values` are ignored. +- `DoesNotExist`: matches the HTTP request header if the specified header does not exist. `values` are ignored. +- `HasPrefix`: matches the HTTP request header if the specified header has a prefix from any of the values provided. +- `HasSuffix`: matches the HTTP request header if the specified header has a suffix from any of the values provided. +- `In`: matches the HTTP request header if its value is in the set of values provided. +- `NotIn`: matches the HTTP request header if its value is not in the set of values provided. +- `MatchesRegex`: matches the HTTP request header if its value matches any of the regular expressions in the `values` field. + +## Supported operations[​](#supported-operations) + +| Datastore type | Create/Delete | Update | Get/List | Notes | +| ------------------------ | ------------- | ------ | -------- | ----- | +| Kubernetes API datastore | Yes | Yes | Yes | | + +#### List filtering on tiers[​](#list-filtering-on-tiers) + +List and watch operations may specify label selectors or field selectors to filter `GlobalNetworkPolicy` resources on tiers returned by the API server. When no selector is specified, the API server returns all `GlobalNetworkPolicy` resources from all tiers that the user has access to. + +##### Field selector[​](#field-selector) + +When using the field selector, supported operators are `=` and `==`. + +The following example shows how to retrieve all `GlobalNetworkPolicy` resources in the `default` tier: + +```bash +kubectl get globalnetworkpolicy --field-selector spec.tier=default +``` + +##### Label selector[​](#label-selector) + +When using the label selector, supported operators are `=`, `==` and `IN`. 
+ +The following example shows how to retrieve all `GlobalNetworkPolicy` resources in the `default` and `net-sec` tiers: + +```bash +kubectl get globalnetworkpolicy -l 'projectcalico.org/tier in (default, net-sec)' +``` + +### Global network set + + + +A global network set resource (GlobalNetworkSet) represents an arbitrary set of IP subnetworks/CIDRs, allowing it to be matched by Calico Enterprise policy. Network sets are useful for applying policy to traffic coming from (or going to) external, non-Calico Enterprise, networks. + +GlobalNetworkSets can also include domain names, whose effect is to allow egress traffic to those domain names, when the GlobalNetworkSet is matched by the destination selector of an egress rule with action Allow. Domain names have no effect in ingress rules, or in a rule whose action is not Allow. + +> **SECONDARY:** Calico Enterprise implements policy for domain names by learning the corresponding IPs from DNS, then programming rules to allow those IPs. This means that if multiple domain names A, B and C all map to the same IP, and there is domain-based policy to allow A, traffic to B and C will be allowed as well. + +The metadata for each network set includes a set of labels. When Calico Enterprise is calculating the set of IPs that should match a source/destination selector within a [global network policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) rule, or within a [network policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) rule whose `namespaceSelector` includes `global()`, it includes the CIDRs from any network sets that match the selector. + +> **SECONDARY:** Since Calico Enterprise matches packets based on their source/destination IP addresses, Calico Enterprise rules may not behave as expected if there is NAT between the Calico Enterprise-enabled node and the networks listed in a network set. 
For example, in Kubernetes, incoming traffic via a service IP is typically SNATed by the kube-proxy before reaching the destination host, so Calico Enterprise's workload policy will see the kube-proxy's host's IP as the source instead of the real source. + +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `globalnetworkset.projectcalico.org`, `globalnetworksets.projectcalico.org` and abbreviations such as `globalnetworkset.p` and `globalnetworksets.p`. + +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: GlobalNetworkSet + +metadata: + + name: a-name-for-the-set + + labels: + + role: external-database + +spec: + + nets: + + - 198.51.100.0/28 + + - 203.0.113.0/24 + + allowedEgressDomains: + + - db.com + + - '*.db.com' +``` + +## Global network set definition[​](#global-network-set-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ------ | --------------------------------------------- | ------------------------------------------------- | ------ | +| name | The name of this network set. | Lower-case alphanumeric with optional `-` or `.`. | string | +| labels | A set of labels to apply to this network set. | | map | + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------- | ------ | ------- | +| nets | The IP networks/CIDRs to include in the set. 
| Valid IPv4 or IPv6 CIDRs, for example "192.0.2.128/25" | list | | +| allowedEgressDomains | The list of domain names that belong to this set and are honored in egress allow rules only. Domain names specified here only work to allow egress traffic from the cluster to external destinations. They don't work to *deny* traffic to destinations specified by domain name, or to allow ingress traffic from *sources* specified by domain name. | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list | | + +### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names) + +When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example: + +- `microsoft.com` +- `tigera.io` + +With a single asterisk in any part of the domain name, it matches one or more name components at that position. For example: + +- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com` +- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com` +- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on + +The following are **not** supported: + +- Multiple wildcards in the same domain, for example: `*.*.mycompany.com` +- Asterisks that are not the entire component, for example: `www.g*.com` +- A wildcard as the last component, for example: `www.mycompany.*` +- More general wildcards, such as regular expressions + +### Global report + +A global report resource is a configuration for generating compliance reports. 
A global report configuration in Calico Enterprise lets you: + +- Specify report contents, frequency, and data filtering +- Specify the node(s) on which to run the report generation jobs +- Enable/disable creation of new jobs for generating the report + +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `globalreport.projectcalico.org`, `globalreports.projectcalico.org` and abbreviations such as `globalreport.p` and `globalreports.p`. + +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: GlobalReport + +metadata: + + name: weekly-full-inventory + +spec: + + reportType: inventory + + schedule: 0 0 * * 0 + + jobNodeSelector: + + nodetype: infrastructure + +--- + +apiVersion: projectcalico.org/v3 + +kind: GlobalReport + +metadata: + + name: hourly-accounts-networkaccess + +spec: + + reportType: network-access + + endpoints: + + namespaces: + + names: ['payable', 'collections', 'payroll'] + + schedule: 0 * * * * + +--- + +apiVersion: projectcalico.org/v3 + +kind: GlobalReport + +metadata: + + name: monthly-widgets-controller-tigera-policy-audit + +spec: + + reportType: policy-audit + + schedule: 0 0 1 * * + + endpoints: + + serviceAccounts: + + names: ['controller'] + + namespaces: + + names: ['widgets'] + +--- + +apiVersion: projectcalico.org/v3 + +kind: GlobalReport + +metadata: + + name: daily-cis-benchmark + +spec: + + reportType: cis-benchmark + + schedule: 0 0 * * * + + cis: + + resultsFilters: + + - benchmarkSelection: { kubernetesVersion: '1.13' } + + exclude: ['1.1.4', '1.2.5'] +``` + +## GlobalReport Definition[​](#globalreport-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ------ | ---------------------------------------- | ------------------------------------------------ | ------ | +| name | The name of this report. 
| Lower-case alphanumeric with optional `-` or `.` | string | +| labels | A set of labels to apply to this report. | | map | + +### Spec[​](#spec) + +| Field | Description | Required | Accepted Values | Schema | +| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------- | +| reportType | The type of report to produce. This field controls the content of the report - see the links for each type for more details. | Yes | [cis‑benchmark](https://docs.tigera.io/calico-enterprise/latest/reference/resources/compliance-reports/cis-benchmark), [inventory](https://docs.tigera.io/calico-enterprise/latest/reference/resources/compliance-reports/inventory), [network‑access](https://docs.tigera.io/calico-enterprise/latest/reference/resources/compliance-reports/network-access), [policy‑audit](https://docs.tigera.io/calico-enterprise/latest/reference/resources/compliance-reports/policy-audit) | string | +| endpoints | Specify which endpoints are in scope. If omitted, selects everything. | | | [EndpointsSelection](#endpointsselection) | +| schedule | Configure report frequency by specifying start and end time in [cron-format](https://en.wikipedia.org/wiki/Cron). 
Reports are started 30 minutes (configurable) after the scheduled value to allow enough time for data archival. A maximum limit of 12 schedules per hour is enforced (an average of one report every 5 minutes). | Yes | | string | +| jobNodeSelector | Specify the node(s) for scheduling the report jobs using selectors. | | | map | +| suspend | Disable future scheduled report jobs. In-flight reports are not affected. | | | bool | +| cis | Parameters related to generating a CIS benchmark report. | | | [CISBenchmarkParams](#cisbenchmarkparams) | + +### EndpointsSelection[​](#endpointsselection) + +| Field | Description | Schema | +| --------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------- | +| selector | Endpoint label selector to restrict endpoint selection. | string | +| namespaces | Namespace name and label selector to restrict endpoints by selected namespaces. | [NamesAndLabelsMatch](#namesandlabelsmatch) | +| serviceAccounts | Service account name and label selector to restrict endpoints by selected service accounts. | [NamesAndLabelsMatch](#namesandlabelsmatch) | + +### CISBenchmarkParams[​](#cisbenchmarkparams) + +| Fields | Description | Required | Schema | +| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------------------------------- | +| highThreshold | Integer percentage value that determines the lower limit of passing tests to consider a node as healthy. Default: 100 | No | int | +| medThreshold | Integer percentage value that determines the lower limit of passing tests to consider a node as unhealthy. 
Default: 50 | No | int | +| includeUnscoredTests | Boolean value that, when false, applies a filter to exclude tests that are marked as “Unscored” by the CIS benchmark standard. If true, the tests will be included in the report. Default: false | No | bool | +| numFailedTests | Integer value that sets the number of tests to display in the Top-failed Tests section of the CIS benchmark report. Default: 5 | No | int | +| resultsFilters | Specifies an include or exclude filter to apply on the test results that will appear on the report. | No | [CISBenchmarkFilter](#cisbenchmarkfilter) | + +### CISBenchmarkFilter[​](#cisbenchmarkfilter) + +| Fields | Description | Required | Schema | +| ------------------ | ----------------------------------------------------------------------------------------------------- | -------- | ----------------------------------------------- | +| benchmarkSelection | Specify which set of benchmarks this filter should apply to. If omitted, selects all benchmark types. | No | [CISBenchmarkSelection](#cisbenchmarkselection) | +| exclude | Specify which benchmark tests to exclude. | No | array of strings | +| include | Specify which benchmark tests to include only (higher precedence than exclude). | No | array of strings | + +### CISBenchmarkSelection[​](#cisbenchmarkselection) + +| Fields | Description | Required | Schema | +| ----------------- | -------------------------------------- | -------- | ------ | +| kubernetesVersion | Specifies a version of the benchmarks. | Yes | string | + +### NamesAndLabelsMatch[​](#namesandlabelsmatch) + +| Field | Description | Schema | +| -------- | ------------------------------------ | ------ | +| names | Set of resource names. | list | +| selector | Selects a set of resources by label. | string | + +Use the `NamesAndLabelsMatch` to limit the scope of endpoints. If both `names` and `selector` are specified, the resource is identified using label *AND* name match. 
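For example, a sketch of an `endpoints` stanza that combines both fields of `NamesAndLabelsMatch` for namespaces (the namespace names and the `environment` label here are illustrative, not taken from the product docs):

```yaml
endpoints:
  namespaces:
    names: ['payable', 'payroll']
    selector: environment == 'production'
```

Because both `names` and `selector` are specified, only the listed namespaces that also carry the matching label are in scope for the report.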
+ +> **SECONDARY:** To use the Calico Enterprise compliance reporting feature, you must ensure all required resource types are being audited and the logs archived in Elasticsearch. You must explicitly configure the [Kubernetes API Server](https://docs.tigera.io/calico-enterprise/latest/observability/kube-audit) to send audit logs for Kubernetes-owned resources to Elasticsearch. + +## Supported operations[​](#supported-operations) + +| Datastore type | Create/Delete | Update | Get/List | Notes | +| --------------------- | ------------- | ------ | -------- | ----- | +| Kubernetes API server | Yes | Yes | Yes | | + +### Global threat feed + +A global threat feed resource (GlobalThreatFeed) represents a feed of threat intelligence used for security purposes. + +Calico Enterprise supports threat feeds that give either + +- a set of IP addresses or IP prefixes, with content type IPSet, or +- a set of domain names, with content type DomainNameSet + +For each IPSet threat feed, Calico Enterprise automatically monitors flow logs for members of the set. IPSet threat feeds can also be configured to be synchronized to a [global network set](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset), allowing you to use them as a dynamically-updating deny-list by incorporating the global network set into network policy. + +For each DomainNameSet threat feed, Calico Enterprise automatically monitors DNS logs for queries (QNAME) or answers (RR NAME or RDATA) that contain members of the set. + +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `globalthreatfeed.projectcalico.org`, `globalthreatfeeds.projectcalico.org` and abbreviations such as `globalthreatfeed.p` and `globalthreatfeeds.p`. 
+ +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: GlobalThreatFeed + +metadata: + + name: sample-global-threat-feed + +spec: + + content: IPSet + + mode: Enabled + + description: "This is the sample global threat feed" + + feedType: Custom + + globalNetworkSet: + + # labels to set on the GNS + + labels: + + level: high + + pull: + + # accepts time in golang duration format + + period: 24h + + http: + + format: + + newlineDelimited: {} + + url: https://an.example.threat.feed/deny-list + + headers: + + - name: "Accept" + + value: "text/plain" + + - name: "APIKey" + + valueFrom: + + # secrets selected must be in the "tigera-intrusion-detection" namespace to be used + + secretKeyRef: + + name: "globalthreatfeed-sample-global-threat-feed-example" + + key: "apikey" +``` + +## Push or Pull[​](#push-or-pull) + +You can configure Calico Enterprise to pull updates from your threat feed using a [`pull`](#pull) stanza in the global threat feed spec. + +Alternatively, you can have your threat feed push updates directly. Leave out the `pull` stanza, and configure your threat feed to create or update the Elasticsearch document that corresponds to the global threat feed object. + +For IPSet threat feeds, this Elasticsearch document will be in the index `.tigera.ipset.<cluster_name>` and must have the ID set to the name of the global threat feed object. The doc should have a single field called `ips`, containing a list of IP prefixes. + +For example: + +```text +PUT .tigera.ipset.cluster01/_doc/sample-global-threat-feed + +{ + + "ips" : ["99.99.99.99/32", "100.100.100.0/24"] + +} +``` + +For DomainNameSet threat feeds, this Elasticsearch document will be in the index `.tigera.domainnameset.<cluster_name>` and must have the ID set to the name of the global threat feed object. The doc should have a single field called `domains`, containing a list of domain names. 
+ +For example: + +```text +PUT .tigera.domainnameset.cluster01/_doc/example-global-threat-feed + +{ + + "domains" : ["malware.badstuff", "hackers.r.us"] + +} +``` + +Refer to the [Elasticsearch document APIs](https://www.elastic.co/guide/en/elasticsearch/reference/6.4/docs-update.html) for more information on how to create and update documents in Elasticsearch. + +## GlobalThreatFeed Definition[​](#globalthreatfeed-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ------ | --------------------------------------------- | ----------------------------------------- | ------ | +| name | The name of this threat feed. | Lower-case alphanumeric with optional `-` | string | +| labels | A set of labels to apply to this threat feed. | | map | + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| ---------------- | ---------------------------------------------------- | ---------------------- | --------------------------------------------- | ------- | +| content | What kind of threat intelligence is provided | IPSet, DomainNameSet | string | IPSet | +| mode | Determines if the threat feed is Enabled or Disabled | Enabled, Disabled | string | Enabled | +| description | Human-readable description of the template | Maximum 256 characters | string | | +| feedType | Distinguishes Builtin threat feeds from Custom feeds | Builtin, Custom | string | Custom | +| globalNetworkSet | Include to sync with a global network set | | [GlobalNetworkSetSync](#globalnetworksetsync) | | +| pull | Configure periodic pull of threat feed updates | | [Pull](#pull) | | + +### Status[​](#status) + +The `status` is read-only for users and updated by the `intrusion-detection-controller` component as it processes global threat feeds. 
+ +| Field | Description | +| -------------------- | -------------------------------------------------------------------------------- | +| lastSuccessfulSync | Timestamp of the last successful update to the threat intelligence from the feed | +| lastSuccessfulSearch | Timestamp of the last successful search of logs for threats | +| errorConditions | List of errors preventing operation of the updates or search | + +### GlobalNetworkSetSync[​](#globalnetworksetsync) + +When you include a `globalNetworkSet` stanza in a global threat feed, it triggers synchronization with a [global network set](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset). This global network set will have the name `threatfeed.<feed name>` where `<feed name>` is the name of the global threat feed it is synced with. This is only supported for threat feeds of type IPSet. + +> **SECONDARY:** A `globalNetworkSet` stanza only works for `IPSet` threat feeds, and you must also include a `pull` stanza. + +| Field | Description | Accepted Values | Schema | +| ------ | --------------------------------------------------------- | --------------- | ------ | +| labels | A set of labels to apply to the synced global network set | | map | + +### Pull[​](#pull) + +When you include a `pull` stanza in a global threat feed, it triggers a periodic pull of new data. On successful pull and update to the data store, we update the `status.lastSuccessfulSync` timestamp. + +If you do not include a `pull` stanza, you must configure your system to [push](#push-or-pull) updates. 
+ +| Field | Description | Accepted Values | Schema | Default | +| ------ | ------------------------------------- | --------------- | ------------------------------------------------------------- | ------- | +| period | How often to pull an update | ≥ 5m | [Duration string](https://golang.org/pkg/time/#ParseDuration) | 24h | +| http | Pull the update from an HTTP endpoint | | [HTTPPull](#httppull) | | + +### HTTPPull[​](#httppull) + +Pull updates from the threat feed by doing an HTTP GET against the given URL. + +| Field | Description | Accepted Values | Schema | +| ------- | --------------------------------------------------------- | --------------- | ------------------------- | +| format | Format of the data the threat feed returns | | [Format](#format) | +| url | The URL to query | | string | +| headers | List of additional HTTP headers to include on the request | | [HTTPHeader](#httpheader) | + +IPSet threat feeds must contain IP addresses or IP prefixes. For example: + +```text +# This is an IP Prefix + +100.100.100.0/24 + +# This is an address + +99.99.99.99 +``` + +DomainNameSet threat feeds must contain domain names. For example: + +```text +# Suspicious domains + +malware.badstuff + +hackers.r.us +``` + +Internationalized domain names (IDNA) may be encoded either as Unicode in UTF-8 format, or as ASCII-Compatible Encoding (ACE) according to [RFC 5890](https://tools.ietf.org/html/rfc5890). + +### Format[​](#format) + +Several different feed formats are supported. The default, `newlineDelimited`, expects a text file containing entries separated by newline characters. It may also include comments prefixed by `#`. `json` uses a [jsonpath](https://goessner.net/articles/JsonPath/) to extract the desired information from a JSON document. `csv` extracts one column from CSV-formatted data. 
+ +| Field | Description | Schema | +| ---------------- | --------------------------- | ------------- | +| newlineDelimited | Newline-delimited text file | Empty object | +| json | JSON object | [JSON](#json) | +| csv | CSV file | [CSV](#csv) | + +#### JSON[​](#json) + +| Field | Description | Schema | +| ----- | ---------------------------------------------------------------------- | ------ | +| path | [jsonpath](https://goessner.net/articles/JsonPath/) to extract values. | string | + +Values can be extracted from the document using any [jsonpath](https://goessner.net/articles/JsonPath/) expression, subject to the limitations mentioned below, that evaluates to a list of strings. For example: `$.` is valid for `["a", "b", "c"]`, and `$.a` is valid for `{"a": ["b", "c"]}`. + +> **WARNING:** No support for subexpressions and filters. Strings in brackets must use double quotes. It cannot operate on JSON decoded struct fields. + +#### CSV[​](#csv) + +| Field | Description | Schema | +| --------------------------- | ------------------------------------------------------------------------- | ------ | +| fieldNum | Number of column containing values. Mutually exclusive with `fieldName`. | int | +| fieldName | Name of column containing values, requires `header: true`. | string | +| header | Whether or not the document contains a header row. | bool | +| columnDelimiter | An alternative delimiter character, such as `\|`. | string | +| commentDelimiter | Lines beginning with this character are skipped. `#` is common. | string | +| recordSize | The number of columns expected in the document. Auto detected if omitted. | int | +| disableRecordSizeValidation | Disable row size checking. Mutually exclusive with `recordSize`. 
| bool | + +### HTTPHeader[​](#httpheader) + +| Field | Description | Schema | +| --------- | --------------------------------------------------------- | ------------------------------------- | +| name | Header name | string | +| value | Literal value | string | +| valueFrom | Include to retrieve the value from a config map or secret | [HTTPHeaderSource](#httpheadersource) | + +> **SECONDARY:** You must include either `value` or `valueFrom`, but not both. + +### HTTPHeaderSource[​](#httpheadersource) + +| Field | Description | Schema | +| --------------- | ------------------------------- | ----------------- | +| configMapKeyRef | Get the value from a config map | [KeyRef](#keyref) | +| secretKeyRef | Get the value from a secret | [KeyRef](#keyref) | + +### KeyRef[​](#keyref) + +KeyRef tells Calico Enterprise where to get the value for a header. The referenced Kubernetes object (either a config map or a secret) must be in the `tigera-intrusion-detection` namespace. The referenced Kubernetes object should have a name with the following prefix format: `globalthreatfeed-<feed name>-`. + +| Field | Description | Accepted Values | Schema | Default | +| -------- | --------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | ------ | ------- | +| name | The name of the config map or secret | | string | | +| key | The key within the config map or secret | | string | | +| optional | Whether the pull can proceed without the referenced value | If the referenced value does not exist, `true` means omit the header. `false` means abort the entire pull until it exists | bool | `false` | + +### Host endpoint + + + +A host endpoint resource (`HostEndpoint`) represents one or more real or virtual interfaces attached to a host that is running Calico Enterprise. 
It enforces Calico Enterprise policy on the traffic that is entering or leaving the host's default network namespace through those interfaces. + +- A host endpoint with `interfaceName: *` represents *all* of a host's real or virtual interfaces. + +- A host endpoint for one specific real interface is configured by `interfaceName: <name-of-interface>`, for example `interfaceName: eth0`, or by leaving `interfaceName` empty and including one of the interface's IPs in `expectedIPs`. + +Each host endpoint may include a set of labels and list of profiles that Calico Enterprise will use to apply [policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) to the interface. If no profiles or labels are applied, Calico Enterprise will not apply any policy. + +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `hostendpoint.projectcalico.org`, `hostendpoints.projectcalico.org` and abbreviations such as `hostendpoint.p` and `hostendpoints.p`. + +**Default behavior of external traffic to/from host** + +If a host endpoint is created and network policy is not in place, the Calico Enterprise default is to deny traffic to/from that endpoint (except for traffic allowed by failsafe rules). For a named host endpoint (i.e. a host endpoint representing a specific interface), Calico Enterprise blocks traffic only to/from the interface specified in the host endpoint. Traffic to/from other interfaces is ignored. + +> **SECONDARY:** Host endpoints with `interfaceName: *` do not support [untracked policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads). + +For a wildcard host endpoint (i.e. a host endpoint representing all of a host's interfaces), Calico Enterprise blocks traffic to/from *all* interfaces on the host (except for traffic allowed by failsafe rules). 
+ +However, profiles can be used in conjunction with host endpoints to modify default behavior of external traffic to/from the host in the absence of network policy. Calico Enterprise provides a default profile resource named `projectcalico-default-allow` that consists of allow-all ingress and egress rules. Host endpoints with the `projectcalico-default-allow` profile attached will have "allow-all" semantics instead of "deny-all" in the absence of policy. + +Note: If you have custom iptables rules, using host endpoints with allow-all rules (with no policies) will accept all traffic and therefore bypass those custom rules. + +> **SECONDARY:** Auto host endpoints specify the `projectcalico-default-allow` profile so they behave similarly to pod workload endpoints. + +> **SECONDARY:** When rendering security rules on other hosts, Calico Enterprise uses the `expectedIPs` field to resolve label selectors to IP addresses. If the `expectedIPs` field is omitted then security rules that use labels will fail to match this endpoint. + +**Host to local workload traffic**: Traffic from a host to its workload endpoints (e.g. Kubernetes pods) is always allowed, despite any policy in place. This ensures that `kubelet` liveness and readiness probes always work. 
+ +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: HostEndpoint + +metadata: + + name: some.name + + labels: + + type: production + +spec: + + interfaceName: eth0 + + node: myhost + + expectedIPs: + + - 192.168.0.1 + + - 192.168.0.2 + + profiles: + + - profile1 + + - profile2 + + ports: + + - name: some-port + + port: 1234 + + protocol: TCP + + - name: another-port + + port: 5432 + + protocol: UDP +``` + +## Host endpoint definition[​](#host-endpoint-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ------ | ------------------------------------------ | --------------------------------------------------- | ------ | +| name | The name of this hostEndpoint. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | +| labels | A set of labels to apply to this endpoint. | | map | + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| ------------- | -------------------------------------------------------------------------- | -------------------------- | -------------------------------------- | ------- | +| node | The name of the node where this HostEndpoint resides. | | string | | +| interfaceName | Either `*` or the name of the specific interface on which to apply policy. | | string | | +| expectedIPs | The expected IP addresses associated with the interface. | Valid IPv4 or IPv6 address | list | | +| profiles | The list of profiles to apply to the endpoint. | | list | | +| ports | List of named ports that this workload exposes. | | List of [EndpointPorts](#endpointport) | | + +### EndpointPort[​](#endpointport) + +An EndpointPort associates a name with a particular TCP/UDP/SCTP port of the endpoint, allowing it to be referenced as a named port in [policy rules](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#entityrule). 
+ +| Field | Description | Accepted Values | Schema | Default | +| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------- | ------ | ------- | +| name | The name to attach to this port, allowing it to be referred to in [policy rules](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#entityrule). Names must be unique within an endpoint. | | string | | +| protocol | The protocol of this named port. | `TCP`, `UDP`, `SCTP` | string | | +| port | The workload port number. | `1`-`65535` | int | | + +> **SECONDARY:** On their own, EndpointPort entries don't result in any change to the connectivity of the port. They only have an effect if they are referred to in policy. + +## Supported operations[​](#supported-operations) + +| Datastore type | Create/Delete | Update | Get/List | Notes | +| --------------------- | ------------- | ------ | -------- | ----- | +| Kubernetes API server | Yes | Yes | Yes | | + +### IP pool + + + +An IP pool resource (`IPPool`) represents a collection of IP addresses from which Calico Enterprise expects endpoint IPs to be assigned. + +For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `ippool.projectcalico.org`, `ippools.projectcalico.org` as well as abbreviations such as `ippool.p` and `ippools.p`. 
+ +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: IPPool + +metadata: + + name: my.ippool-1 + +spec: + + cidr: 10.1.0.0/16 + + ipipMode: CrossSubnet + + natOutgoing: true + + disabled: false + + nodeSelector: all() + + allowedUses: + + - Workload + + - Tunnel +``` + +## IP pool definition[​](#ip-pool-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ----- | ------------------------------------------- | --------------------------------------------------- | ------ | +| name | The name of this IPPool resource. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | --------------------------------------------- | +| cidr | IP range to use for this pool. | A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | | +| blockSize | The CIDR size of allocation blocks used by this pool. Blocks are allocated on demand to hosts and are used to aggregate routes. The value can only be set when the pool is created. | 20 to 32 (inclusive) for IPv4 and 116 to 128 (inclusive) for IPv6 | int | `26` for IPv4 pools and `122` for IPv6 pools. 
| +| ipipMode | The mode defining when IPIP will be used. Cannot be set at the same time as `vxlanMode`. | Always, CrossSubnet, Never | string | `Never` | +| vxlanMode | The mode defining when VXLAN will be used. Cannot be set at the same time as `ipipMode`. | Always, CrossSubnet, Never | string | `Never` | +| natOutgoing | When enabled, packets sent from Calico Enterprise networked containers in this pool to destinations outside of any Calico IP pools will be masqueraded. | true, false | boolean | `false` | +| disabled | When set to true, Calico Enterprise IPAM will not assign addresses from this pool. | true, false | boolean | `false` | +| disableBGPExport *(since v3.11.0)* | Disable exporting routes from this IP Pool’s CIDR over BGP. | true, false | boolean | `false` | +| nodeSelector | Selects the nodes where Calico Enterprise IPAM should assign pod addresses from this pool. Can be overridden if a pod [explicitly identifies this IP pool by annotation](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration#using-kubernetes-annotations). | | [selector](#node-selector) | all() | +| allowedUses *(since v3.11.0)* | Controls whether the pool will be used for automatic assignments of certain types. See [below](#allowed-uses). | Workload, Tunnel, HostSecondaryInterface, LoadBalancer | list of strings | `["Workload", "Tunnel"]` | +| awsSubnetID *(since v3.11.0)* | May be set to the ID of an AWS VPC Subnet that contains the CIDR of this IP pool to activate the AWS-backed pool feature. See [below](#aws-backed-pools). | Valid AWS Subnet ID. | string | | +| assignmentMode | Controls whether the pool will be used for automatic assignments or only if requested manually | Automatic, Manual | strings | `Automatic` | + +> **SECONDARY:** Do not use a custom `blockSize` until **all** Calico Enterprise components have been updated to a version that supports it (at least v2.3.0). 
Older versions of components do not understand the field so they may corrupt the IP pool by creating blocks of incorrect size. + +### Allowed uses[​](#allowed-uses) + +When automatically assigning IP addresses to workloads, only pools with "Workload" in their `allowedUses` field are consulted. Similarly, when assigning IPs for tunnel devices, only "Tunnel" pools are eligible. Finally, when assigning IP addresses for AWS secondary ENIs, only pools with allowed use "HostSecondaryInterface" are candidates. + +Combining options for the `allowedUses` field is limited. You can specify only the following options and option combinations: + +- `allowedUses: ["Tunnel","Workload"]` (default) +- `allowedUses: ["Tunnel"]` +- `allowedUses: ["Workload"]` +- `allowedUses: ["LoadBalancer"]` +- `allowedUses: ["HostSecondaryInterface"]` + +If the `allowedUses` field is not specified, it defaults to `["Workload", "Tunnel"]` for compatibility with older versions of Calico. It is not possible to specify a pool with no allowed uses. + +The `allowedUses` field is only consulted for new allocations, changing the field has no effect on previously allocated addresses. + +Calico Enterprise supports Kubernetes [annotations that force the use of specific IP addresses](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration#requesting-a-specific-ip-address). These annotations take precedence over the `allowedUses` field. + +### AWS-backed pools[​](#aws-backed-pools) + +Calico Enterprise supports IP pools that are backed by the AWS fabric. This feature was added in order to support egress gateways on the AWS fabric; the restrictions and requirements are currently documented as part of the [egress gateways on AWS guide](https://docs.tigera.io/calico-enterprise/latest/networking/egress/egress-gateway-aws). + +### IPIP[​](#ipip) + +Routing of packets using IP-in-IP will be used when the destination IP address is in an IP Pool that has IPIP enabled. 
In addition, if the `ipipMode` is set to `CrossSubnet`, Calico Enterprise will only route using IP-in-IP if the IP address of the destination node is in a different subnet. The subnet of each node is configured on the node resource (which may be automatically determined when running the `node` service). + +For details on configuring IP-in-IP on your deployment, please refer to [Configuring IP-in-IP](https://docs.tigera.io/calico-enterprise/latest/networking/configuring/vxlan-ipip). + +> **SECONDARY:** Setting `natOutgoing` is recommended on any IP Pool with `ipip` enabled. When `ipip` is enabled without `natOutgoing` routing between Workloads and Hosts running Calico Enterprise is asymmetric and may cause traffic to be filtered due to [RPF](https://en.wikipedia.org/wiki/Reverse_path_forwarding) checks failing. + +### VXLAN[​](#vxlan) + +Routing of packets using VXLAN will be used when the destination IP address is in an IP Pool that has VXLAN enabled. In addition, if the `vxlanMode` is set to `CrossSubnet`, Calico Enterprise will only route using VXLAN if the IP address of the destination node is in a different subnet. The subnet of each node is configured on the node resource (which may be automatically determined when running the `node` service). + +> **SECONDARY:** Setting `natOutgoing` is recommended on any IP Pool with `vxlan` enabled. When `vxlan` is enabled without `natOutgoing` routing between Workloads and Hosts running Calico Enterprise is asymmetric and may cause traffic to be filtered due to [RPF](https://en.wikipedia.org/wiki/Reverse_path_forwarding) checks failing. + +### Block sizes[​](#block-sizes) + +The default block sizes of `26` for IPv4 and `122` for IPv6 provide blocks of 64 addresses. This allows addresses to be allocated in groups to workloads running on the same host. By grouping addresses, fewer routes need to be exchanged between hosts and to other BGP peers. 
If a host allocates all of the addresses in a block then it will be allocated an additional block. If there are no more blocks available then the host can take addresses from blocks allocated to other hosts. Specific routes are added for the borrowed addresses, which has an impact on route table size. + +Increasing the block size from the default (e.g., using `24` for IPv4 to give 256 addresses per block) means fewer blocks per host, and potentially fewer routes. But try to ensure that there are at least as many blocks in the pool as there are hosts. + +Reducing the block size from the default (e.g., using `28` for IPv4 to give 16 addresses per block) means more blocks per host and therefore potentially more routes. This can be beneficial if it allows the blocks to be more fairly distributed amongst the hosts. + +### Node Selector[​](#node-selector) + +For details on configuring IP pool node selectors, please read the [Assign IP addresses based on topology](https://docs.tigera.io/calico-enterprise/latest/networking/ipam/assign-ip-addresses-topology) guide. + +#### Selector reference[​](#selector-reference) + +A label selector is an expression which either matches or does not match a resource based on its labels. + +Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses. + +| Expression | Meaning | +| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| **Logical operators** | | +| `(<expression>)` | Matches if and only if `<expression>` matches. (Parentheses are used for grouping expressions.) | +| `!<expression>` | Matches if and only if `<expression>` does not match. **Tip:** `!` is a special character at the start of a YAML string; if you need to use `!` at the start of a YAML string, enclose the string in quotes.
| +| `<expression 1> && <expression 2>` | "And": matches if and only if both `<expression 1>` and `<expression 2>` match. | +| `<expression 1> \|\| <expression 2>` | "Or": matches if and only if either `<expression 1>` or `<expression 2>` matches. | +| **Match operators** | | +| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. | +| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. | +| `k == 'v'` | Matches resources with the label 'k' and value 'v'. | +| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` | +| `has(k)` | Matches resources with label 'k', independent of value. To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` | +| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set | +| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set | +| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' | +| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' | +| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' | + +Operators have the following precedence: + +- **Highest**: all the match operators +- Parentheses `( ... )` +- Negation with `!` +- Conjunction with `&&` +- **Lowest**: Disjunction with `||` + +For example, the expression + +```text +! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'} +``` + +would be "bracketed" like this: + +```text +(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'})) +``` + +It would match: + +- Any resource that did not have label "my-label". + +- Any resource that both: + + + + - Has a value for `my-label` that starts with "prod", and, + - Has a role label with value either "frontend", or "business".
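As a sketch of how such a selector might be used in practice, the following hypothetical IP pool (name, CIDR, and label keys are illustrative, not from the original) assigns pod addresses only on nodes in a given zone that are not marked infra-only:

```yaml
# Hypothetical IP pool whose nodeSelector combines the match operators
# above with && and ! (quoting avoids YAML issues with special characters).
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: zone-east-pool          # illustrative name
spec:
  cidr: 10.2.0.0/16             # illustrative CIDR
  nodeSelector: "zone == 'east' && !has(infra-only)"
```

Nodes label themselves via Kubernetes node labels, which the node controller can sync to Calico Enterprise node objects when `syncLabels` is enabled.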
+ +## Supported operations[​](#supported-operations) + +| Datastore type | Create/Delete | Update | Get/List | Notes | +| --------------------- | ------------- | ------ | -------- | ----- | +| Kubernetes API server | Yes | Yes | Yes | | + +## See also[​](#see-also) + +The [`IPReservation` resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ipreservation) allows for small parts of an IP pool to be reserved so that they will not be used for automatic IPAM assignments. + +### IP reservation + +An IP reservation resource (`IPReservation`) represents a collection of IP addresses that Calico Enterprise should not use when automatically assigning new IP addresses. It only applies when Calico Enterprise IPAM is in use. + +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: IPReservation + +metadata: + + name: my-ipreservation-1 + +spec: + + reservedCIDRs: + + - 192.168.2.3 + + - 10.0.2.3/32 + + - cafe:f00d::/123 +``` + +## IP reservation definition[​](#ip-reservation-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ----- | -------------------------------------------------- | --------------------------------------------------- | ------ | +| name | The name of this IPReservation resource. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| ------------- | --------------------------------------------------------------- | -------------------------------------------------- | ------ | ------- | +| reservedCIDRs | List of IP addresses and/or networks specified in CIDR notation | List of valid IP addresses (v4 or v6) and/or CIDRs | list | | + +### Notes[​](#notes) + +The implementation of `IPReservation`s is designed to handle reservation of a small number of IP addresses/CIDRs from (generally much larger) IP pools. 
If a significant portion of an IP pool is reserved (say more than 10%) then Calico Enterprise may become significantly slower when searching for free IPAM blocks. + +Since `IPReservations` must be consulted for every IPAM assignment request, it's best to have one or two `IPReservation` resources with multiple addresses per resource, rather than many `IPReservation` resources with one address each. + +If an `IPReservation` is created after an IP from its range is already in use then the IP is not automatically released back to the pool. The reservation check is only done at auto allocation time. + +Calico Enterprise supports Kubernetes [annotations that force the use of specific IP addresses](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration#requesting-a-specific-ip-address). These annotations override any `IPReservation`s that are in place. + +When Windows nodes claim blocks of IPs they automatically assign the first three IPs in each block and the final IP for internal purposes. These assignments cannot be blocked by an `IPReservation`. However, if a whole IPAM block is reserved with an `IPReservation`, Windows nodes will not claim such a block. + +### IPAM configuration + +An IPAM configuration resource (`IPAMConfiguration`) represents global IPAM configuration options. + +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: IPAMConfiguration + +metadata: + + name: default + +spec: + + strictAffinity: false + + maxBlocksPerHost: 4 +``` + +## IPAM configuration definition[​](#ipam-configuration-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ----- | --------------------------------------------------------- | --------------- | ------ | +| name | Unique name to describe this resource instance. Required. | default | string | + +The resource is a singleton which must have the name `default`.
+ +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| ---------------- | ------------------------------------------------------------------- | --------------- | ------ | ------- | +| strictAffinity | When StrictAffinity is true, borrowing IP addresses is not allowed. | true, false | bool | false | +| maxBlocksPerHost | The max number of blocks that can be affine to each host. | 0 - max(int32) | int | 20 | + +## Supported operations[​](#supported-operations) + +| Datastore type | Create | Delete | Update | Get/List | +| --------------------- | ------ | ------ | ------ | -------- | +| etcdv3 | Yes | Yes | Yes | Yes | +| Kubernetes API server | Yes | Yes | Yes | Yes | + +### License key + +A License Key resource (`LicenseKey`) represents a user's license to use Calico Enterprise. Keys are provided by Tigera support, and must be applied to the cluster to enable Calico Enterprise features. + +For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `licensekey.projectcalico.org`, `licensekeys.projectcalico.org` as well as abbreviations such as `licensekey.p` and `licensekeys.p`. + +## Working with license keys[​](#working-with-license-keys) + +### Applying or updating a license key[​](#applying-or-updating-a-license-key) + +When you add Calico Enterprise to an existing Kubernetes cluster or create a new OpenShift cluster, you must apply your license key to complete the installation and gain access to the full set of Calico Enterprise features. + +In deployments that use multicluster management, a license key is required only on the management cluster. + +When your license key expires, you must update it to continue using Calico Enterprise. + +To apply or update a license key, use the following command, replacing `<customer-name>` with the customer name in the file sent to you by Tigera.
+ +**Command** + +```bash +kubectl apply -f <customer-name>-license.yaml +``` + +**Example** + +```bash +kubectl apply -f awesome-corp-license.yaml +``` + +### Viewing information about your license key[​](#viewing-information-about-your-license-key) + +To view the number of licensed nodes and the license key expiry, use: + +```bash +kubectl get licensekeys.p -o custom-columns='Name:.metadata.name,MaxNodes:.status.maxnodes,Expiry:.status.expiry,PackageType:.status.package' +``` + +This is an example of the output of the above command. + +```text +Name MaxNodes Expiry Package + +default 100 2021-10-01T23:59:59Z Enterprise +``` + +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: LicenseKey + +metadata: + + creationTimestamp: null + + name: default + +spec: + + certificate: | + + -----BEGIN CERTIFICATE----- + + MII...n5 + + -----END CERTIFICATE----- + + token: eyJ...zaQ + +status: + + expiry: '2021-10-01T23:59:59Z' + + maxnodes: 100 + + package: Enterprise +``` + +The data fields in the license key resource may change without warning. The license key resource is currently a singleton: the only valid name is `default`. + +## Supported operations[​](#supported-operations) + +| Datastore type | Create | Delete | Update | Get/List | Notes | +| --------------------- | ------ | ------ | ------ | -------- | ----- | +| Kubernetes API server | Yes | No | Yes | Yes | | + +### Kubernetes controllers configuration + +A Calico Enterprise [Kubernetes controllers](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/configuration) configuration resource (`KubeControllersConfiguration`) represents configuration options for the Calico Enterprise Kubernetes controllers.
+ +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: KubeControllersConfiguration + +metadata: + + name: default + +spec: + + logSeverityScreen: Info + + healthChecks: Enabled + + prometheusMetricsPort: 9094 + + controllers: + + node: + + reconcilerPeriod: 5m + + leakGracePeriod: 15m + + syncLabels: Enabled + + hostEndpoint: + + autoCreate: Disabled + + createDefaultHostEndpoint: Enabled + + templates: + + - generateName: custom-host-endpoint + + interfaceCIDRs: + + - 1.2.3.0/24 + + interfacePattern: "eth0|eth1" + + nodeSelector: "has(my-label)" + + labels: + + key: value + + loadbalancer: + + assignIPs: AllServices +``` + +## Kubernetes controllers configuration definition[​](#kubernetes-controllers-configuration-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ----- | --------------------------------------------------------- | ----------------- | ------ | +| name | Unique name to describe this resource instance. Required. | Must be `default` | string | + +- Calico Enterprise automatically creates a resource named `default` containing the configuration settings, only the name `default` is used and only one object of this type is allowed. You can use [calicoctl](https://docs.tigera.io/calico-enterprise/latest/reference/clis/calicoctl/overview) to view and edit these settings + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| --------------------- | --------------------------------------------------------- | ----------------------------------- | --------------------------- | ------- | +| logSeverityScreen | The log severity above which logs are sent to the stdout. | Debug, Info, Warning, Error, Fatal | string | Info | +| healthChecks | Enable support for health checks | Enabled, Disabled | string | Enabled | +| prometheusMetricsPort | Port on which to serve prometheus metrics. | Set to 0 to disable, > 0 to enable. 
| TCP port | 9094 | +| controllers | Enabled controllers and their settings | | [Controllers](#controllers) | | + +### Controllers[​](#controllers) + +| Field | Description | Schema | +| ----------------- | ------------------------------------------------------ | ------------------------------------------------------------------------------- | +| node | Enable and configure the node controller | omit to disable, or [NodeController](#nodecontroller) | +| federatedservices | Enable and configure the federated services controller | omit to disable, or [FederatedServicesController](#federatedservicescontroller) | + +### NodeController[​](#nodecontroller) + +The node controller automatically cleans up configuration for nodes that no longer exist. Optionally, it can create host endpoints for all Kubernetes nodes. + +| Field | Description | Accepted Values | Schema | Default | +| ---------------- | -------------------------------------------------------------------------------------- | ----------------- | ------------------------------------------------------------- | ------- | +| reconcilerPeriod | Period to perform reconciliation with the Calico Enterprise datastore | | [Duration string](https://golang.org/pkg/time/#ParseDuration) | 5m | +| syncLabels | When enabled, Kubernetes node labels will be copied to Calico Enterprise node objects. | Enabled, Disabled | string | Enabled | +| hostEndpoint | Configures the host endpoint controller | | [HostEndpoint](#hostendpoint) | | +| leakGracePeriod | Grace period to use when garbage collecting suspected leaked IP addresses. 
| | [Duration string](https://golang.org/pkg/time/#ParseDuration) | 15m | + +### HostEndpoint[​](#hostendpoint) + +| Field | Description | Accepted Values | Schema | Default | +| ------------------------- | --------------------------------------------------- | ----------------- | --------------------- | -------- | +| autoCreate | When enabled, automatically create host endpoints | Enabled, Disabled | string | Disabled | +| createDefaultHostEndpoint | When enabled, default host endpoint will be created | Enabled, Disabled | string | Enabled | +| templates | Controls creation of custom host endpoints | | [Template](#template) | | + +### Template[​](#template) + +| Field | Description | Accepted Values | Schema | Default | +| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------- | ---------------------------------- | ------- | +| generateName | Unique name used as suffix for host endpoints created based on this template | Alphanumeric string | string | | +| nodeSelector | Selects the nodes for which this template should create host endpoints | | [Selector](#selectors) | all() | +| interfaceCIDRs | This configuration defines which IP addresses from a node's specification (including standard, tunnel, and WireGuard IPs) are eligible for inclusion in the generated HostEndpoint. IP addresses must fall within the provided CIDR ranges to be considered. If no address on the node matches the specified CIDRs, the HostEndpoint creation is skipped. 
| List of valid CIDRs | List string | | +| interfacePattern | Regex to include matching interfaces and their IPs | string | string | | +| labels | Labels to be added to generated host endpoints matching this template | | map of string key to string values | | + +### Selectors[​](#selectors) + +A label selector is an expression which either matches or does not match a resource based on its labels. + +Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses. + +| Expression | Meaning | +| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| **Logical operators** | | +| `(<expression>)` | Matches if and only if `<expression>` matches. (Parentheses are used for grouping expressions.) | +| `!<expression>` | Matches if and only if `<expression>` does not match. **Tip:** `!` is a special character at the start of a YAML string; if you need to use `!` at the start of a YAML string, enclose the string in quotes. | +| `<expression 1> && <expression 2>` | "And": matches if and only if both `<expression 1>` and `<expression 2>` match. | +| `<expression 1> \|\| <expression 2>` | "Or": matches if and only if either `<expression 1>` or `<expression 2>` matches. | +| **Match operators** | | +| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. | +| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. | +| `k == 'v'` | Matches resources with the label 'k' and value 'v'. | +| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` | +| `has(k)` | Matches resources with label 'k', independent of value.
To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` | +| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set | +| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set | +| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' | +| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' | +| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' | + +Operators have the following precedence: + +- **Highest**: all the match operators +- Parentheses `( ... )` +- Negation with `!` +- Conjunction with `&&` +- **Lowest**: Disjunction with `||` + +For example, the expression + +```text +! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'} +``` + +would be "bracketed" like this: + +```text +(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'})) +``` + +It would match: + +- Any resource that did not have label "my-label". + +- Any resource that both: + + + + - Has a value for `my-label` that starts with "prod", and, + - Has a role label with value either "frontend", or "business". + +### FederatedServicesController[​](#federatedservicescontroller) + +The federated services controller syncs Kubernetes services from remote clusters defined through [RemoteClusterConfigurations](https://docs.tigera.io/calico-enterprise/latest/reference/resources/remoteclusterconfiguration).
+ +| Field | Description | Schema | Default | +| ---------------- | --------------------------------------------------------------------- | ------------------------------------------------------------- | ------- | +| reconcilerPeriod | Period to perform reconciliation with the Calico Enterprise datastore | [Duration string](https://golang.org/pkg/time/#ParseDuration) | 5m | + +### LoadBalancerController[​](#loadbalancercontroller) + +The load balancer controller manages IPAM for Services of type LoadBalancer. + +| Field | Description | Accepted Values | Schema | Default | +| --------- | ---------------------------------------------- | ---------------------------------- | ------ | ----------- | +| assignIPs | Mode in which LoadBalancer controller operates | AllServices, RequestedServicesOnly | String | AllServices | + +## Supported operations[​](#supported-operations) + +| Datastore type | Create | Delete (Global `default`) | Update | Get/List | Notes | +| --------------------- | ------ | ------------------------- | ------ | -------- | ----- | +| Kubernetes API server | Yes | Yes | Yes | Yes | | + +### Managed Cluster + +A Managed Cluster resource (`ManagedCluster`) represents a cluster managed by a centralized management plane with a shared Elasticsearch. The management plane provides central control of the managed cluster and stores its logs. + +Calico Enterprise supports connecting multiple Calico Enterprise clusters as described in the Multi-cluster management installation guide. + +For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `managedcluster`, `managedclusters`, `managedcluster.projectcalico.org`, `managedclusters.projectcalico.org` as well as abbreviations such as `managedcluster.p` and `managedclusters.p`.
+ +## Sample YAML[​](#sample-yaml) + +```yaml +apiVersion: projectcalico.org/v3 + +kind: ManagedCluster + +metadata: + + name: managed-cluster + +spec: + + operatorNamespace: tigera-operator +``` + +## Managed cluster definition[​](#managed-cluster-definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | +| ----- | --------------------------------------------------------- | --------------------------------------------------- | ------ | +| name | Unique name to describe this resource instance. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | + +- `cluster` is a reserved name for the management plane and is considered an invalid value + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| -------------------- | ----------------------------------------------------------------------------------------------------------------- | --------------- | ------ | ------- | +| installationManifest | Installation Manifest to be applied on a managed cluster infrastructure | None | string | `Empty` | +| operatorNamespace | The namespace of the managed cluster's operator. This value is used in the generation of the InstallationManifest | None | string | `Empty` | -##### `dnsPolicyNfqueueID` +- `installationManifest` field can be retrieved only once at creation time. Updates are not supported for this field. -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `dnsPolicyNfqueueID` | -| Description | The NFQUEUE ID to use for DNS Policy re-evaluation when the domains IP hasn't been programmed to ipsets yet. Used when DNSPolicyMode is DelayDeniedPacket. 
|
-| Schema | Integer |
-| Default | `100` |
+To extract the installation manifest at creation time, the `-o jsonpath="{.spec.installationManifest}"` parameter can be used with a `kubectl` command.
-##### `dnsPolicyNfqueueSize`
+### Status[​](#status)
-| Attribute | Value |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key | `dnsPolicyNfqueueSize` |
-| Description | The size of the NFQUEUE for DNS policy re-evaluation. This is the maximum number of denied packets that may be queued up pending re-evaluation. Used when DNSPolicyMode is DelayDeniedPacket. |
-| Schema | Integer |
-| Default | `255` |
+Status represents the latest observed status of a Managed cluster. The `status` is read-only for users and updated by the Calico Enterprise components.
-##### `dnsTrustedServers`
+| Field | Description | Schema |
+| ---------- | --------------------------------------------------------------------------- | -------------------------------------- |
+| conditions | List of conditions that describe the current status of the Managed cluster. | List of ManagedClusterStatusConditions |
-| Attribute | Value |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key | `dnsTrustedServers` |
-| Description | The DNS servers that Felix should trust.
Each entry here must be `<ip>[:<port>]` - indicating an explicit DNS server IP - or `k8s-service:<name>[/<namespace>][:port]` - indicating a Kubernetes DNS service. `<port>` defaults to the first service port, or 53 for an IP, and `<namespace>` to `kube-system`. An IPv6 address with a port must use the square brackets convention, for example `[fd00:83a6::12]:5353`. Note that Felix (calico-node) will need RBAC permission to read the details of each service specified by a `k8s-service:...` form. |
-| Schema | List of strings: `["<server>", ...]`. |
-| Default | none |
+**ManagedClusterStatusConditions**
-#### L7 logs[​](#l7-logs)
+Conditions represent the latest observed set of conditions for a Managed cluster. The connection between a management plane and a managed plane will be reported as follows:
-##### `l7LogsFileAggregationDestinationInfo`
+- `Unknown` when no initial connection has been established
+- `True` when both planes have an established connection
+- `False` when the planes no longer have an established connection
-| Attribute | Value |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key | `l7LogsFileAggregationDestinationInfo` |
-| Description | Used to choose the type of aggregation for the destination metadata on L7 log entries. Accepted values are IncludeL7DestinationInfo and ExcludeL7DestinationInfo. IncludeL7DestinationInfo - Include destination metadata in the logs. ExcludeL7DestinationInfo - Aggregate over all other fields ignoring the destination aggregated name, namespace, and type. |
-| Schema | One of: `ExcludeL7DestinationInfo`, `IncludeL7DestinationInfo`.
| -| Default | `IncludeL7DestinationInfo` | +| Field | Description | Accepted Values | Schema | Default | +| ------ | ------------------------------------------------------------------------- | -------------------------- | ------ | ------------------------- | +| type | Type of status that is being reported | - | string | `ManagedClusterConnected` | +| status | Status of the connection between a Managed cluster and management cluster | `Unknown`, `True`, `False` | string | `Unknown` | -##### `l7LogsFileAggregationHTTPHeaderInfo` +[Multi-cluster management](https://docs.tigera.io/calico-enterprise/latest/multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster) -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `l7LogsFileAggregationHTTPHeaderInfo` | -| Description | Used to choose the type of aggregation for HTTP header data on L7 log entries. . Accepted values are IncludeL7HTTPHeaderInfo and ExcludeL7HTTPHeaderInfo. IncludeL7HTTPHeaderInfo - Include HTTP header data in the logs. ExcludeL7HTTPHeaderInfo - Aggregate over all other fields ignoring the user agent and log type. | -| Schema | One of: `ExcludeL7HTTPHeaderInfo`, `IncludeL7HTTPHeaderInfo`. 
| -| Default | `ExcludeL7HTTPHeaderInfo` | +### Network policy -##### `l7LogsFileAggregationHTTPMethod` + -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `l7LogsFileAggregationHTTPMethod` | -| Description | Used to choose the type of aggregation for the HTTP request method on L7 log entries. . Accepted values are IncludeL7HTTPMethod and ExcludeL7HTTPMethod. IncludeL7HTTPMethod - Include HTTP method in the logs. ExcludeL7HTTPMethod - Aggregate over all other fields ignoring the HTTP method. | -| Schema | One of: `ExcludeL7HTTPMethod`, `IncludeL7HTTPMethod`. | -| Default | `IncludeL7HTTPMethod` | + -##### `l7LogsFileAggregationNumURLPath` + -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `l7LogsFileAggregationNumURLPath` | -| Description | Used to choose the number of components in the url path to display. This allows for the url to be truncated in case parts of the path provide no value. Setting this value to negative will allow all parts of the path to be displayed. . 
| -| Schema | Integer | -| Default | `5` | + -##### `l7LogsFileAggregationResponseCode` + -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `l7LogsFileAggregationResponseCode` | -| Description | Used to choose the type of aggregation for the response code on L7 log entries. . Accepted values are IncludeL7ResponseCode and ExcludeL7ResponseCode. IncludeL7ResponseCode - Include the response code in the logs. ExcludeL7ResponseCode - Aggregate over all other fields ignoring the response code. | -| Schema | One of: `ExcludeL7ResponseCode`, `IncludeL7ResponseCode`. | -| Default | `IncludeL7ResponseCode` | + -##### `l7LogsFileAggregationServiceInfo` + -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `l7LogsFileAggregationServiceInfo` | -| Description | Used to choose the type of aggregation for the service data on L7 log entries. . Accepted values are IncludeL7ServiceInfo and ExcludeL7ServiceInfo. IncludeL7ServiceInfo - Include service data in the logs. ExcludeL7ServiceInfo - Aggregate over all other fields ignoring the service name, namespace, and port. | -| Schema | One of: `ExcludeL7ServiceInfo`, `IncludeL7ServiceInfo`. | -| Default | `IncludeL7ServiceInfo` | + + + + +A network policy resource (`NetworkPolicy`) represents an ordered set of rules which are applied to a collection of endpoints that match a [label selector](#selector). 
+ +`NetworkPolicy` is a namespaced resource. `NetworkPolicy` in a specific namespace only applies to [workload endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) in that namespace. Two resources are in the same namespace if the `namespace` value is set the same on both. See [global network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) for non-namespaced network policy. + +`NetworkPolicy` resources can be used to define network connectivity rules between groups of Calico Enterprise endpoints and host endpoints, and take precedence over [profile resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) if any are defined. + +NetworkPolicies are organized into [tiers](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the next [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), to enable hierarchical security policy. + +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `networkpolicy.projectcalico.org`, `networkpolicies.projectcalico.org` and abbreviations such as `networkpolicy.p` and `networkpolicies.p`. + +## Sample YAML[​](#sample-yaml) + +This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on `database` endpoints. 
+ +```yaml +apiVersion: projectcalico.org/v3 + +kind: NetworkPolicy + +metadata: + + name: internal-access.allow-tcp-6379 + + namespace: production + +spec: + + tier: internal-access + + selector: role == 'database' + + types: + + - Ingress + + - Egress + + ingress: + + - action: Allow + + metadata: + + annotations: + + from: frontend + + to: database + + protocol: TCP + + source: + + selector: role == 'frontend' + + destination: + + ports: + + - 6379 + + egress: + + - action: Allow +``` + +## Definition[​](#definition) + +### Metadata[​](#metadata) + +| Field | Description | Accepted Values | Schema | Default | +| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- | +| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | | +| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" | + +### Spec[​](#spec) + +| Field | Description | Accepted Values | Schema | Default | +| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- | +| order | Controls the order of precedence. Calico Enterprise applies the policy with the lowest value first. | | float | | +| tier | Name of the [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier) this policy belongs to. | | string | `default` | +| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() | +| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. 
To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* | +| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | | +| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | | +| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select a specific service account by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() | +| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | | + +\* If `types` has no value, Calico Enterprise defaults as follows. + +> | Ingress Rules Present | Egress Rules Present | `Types` value | +> | --------------------- | -------------------- | ----------------- | +> | No | No | `Ingress` | +> | Yes | No | `Ingress` | +> | No | Yes | `Egress` | +> | Yes | Yes | `Ingress, Egress` | + +### Rule[​](#rule) + +A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order. + +| Field | Description | Accepted Values | Schema | Default | +| ----------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | ----------------------------- | ------- | +| metadata | Per-rule metadata. | | [RuleMetadata](#rulemetadata) | | +| action | Action to perform when matching this rule. | `Allow`, `Deny`, `Log`, `Pass` | string | | +| protocol | Positive protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | +| notProtocol | Negative protocol match. 
| `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | +| icmp | ICMP match criteria. | | [ICMP](#icmp) | | +| notICMP | Negative match on ICMP. | | [ICMP](#icmp) | | +| ipVersion | Positive IP version match. | `4`, `6` | integer | | +| source | Source match parameters. | | [EntityRule](#entityrule) | | +| destination | Destination match parameters. | | [EntityRule](#entityrule) | | +| http | Match HTTP request parameters. Application layer policy must be enabled to use this field. | | [HTTPMatch](#httpmatch) | | + +After a `Log` action, processing continues with the next rule; `Allow` and `Deny` are immediate and final and no further rules are processed. + +An `action` of `Pass` in a `NetworkPolicy` or `GlobalNetworkPolicy` will skip over the remaining policies and jump to the first [profile](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) assigned to the endpoint, applying the policy configured in the profile; if there are no Profiles configured for the endpoint the default applied action is `Deny`. + +### RuleMetadata[​](#rulemetadata) + +Metadata associated with a specific rule (rather than the policy as a whole). The contents of the metadata does not affect how a rule is interpreted or enforced; it is simply a way to store additional information for use by operators or applications that interact with Calico Enterprise. + +| Field | Description | Schema | Default | +| ----------- | ----------------------------------- | ----------------------- | ------- | +| annotations | Arbitrary non-identifying metadata. | map of string to string | | + +Example: + +```yaml +metadata: + + annotations: + + app: database + + owner: devops +``` + +Annotations follow the [same rules as Kubernetes for valid syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set). 
+
+On Linux with the iptables data plane, rule annotations are rendered as comments in the form `-m comment --comment "<key>=<value>"` on the iptables rule(s) that correspond to the Calico Enterprise rule.
+
+### ICMP[​](#icmp)
+
+| Field | Description | Accepted Values | Schema | Default |
+| ----- | ------------------- | -------------------- | ------- | ------- |
+| type | Match on ICMP type. | Can be integer 0-254 | integer | |
+| code | Match on ICMP code. | Can be integer 0-255 | integer | |
+
+### EntityRule[​](#entityrule)
+
+Entity rules specify the attributes of the source or destination of a packet that must match for the rule as a whole to match. Packets can be matched on combinations of:
+
+- Identity of the source/destination, by using [selectors](#selector) or by specifying a particular Kubernetes `Service`. Selectors can match [workload endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint), [host endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint) and ([namespaced](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkset) or [global](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset)) network sets.
+- Source/destination IP address, protocol and port.
+
+If the rule contains multiple match criteria (for example, an IP and a port) then all match criteria must match for the rule as a whole to match a packet.
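As an illustration of combined match criteria, here is a hedged `destination` fragment (the CIDR and port values are arbitrary examples, not from the source):

```yaml
destination:
  # Both criteria below must match for the rule to match a packet:
  nets:
    - 10.0.0.0/24 # destination IP must fall in this CIDR
  ports:
    - 6379 # and the destination port must be 6379
```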
+ +| Field | Description | Accepted Values | Schema | Default | +| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------- | +| nets | Match packets with IP in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | +| notNets | Negative match on CIDRs. Match packets with IP not in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | +| selector | Positive match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | +| notSelector | Negative match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | +| namespaceSelector | Positive match on selected namespaces. If specified, only workload endpoints in the selected Kubernetes namespaces are matched. Matches namespaces based on the labels that have been applied to the namespaces. 
Defines the scope that selectors will apply to, if not defined then selectors apply to the NetworkPolicy's namespace. Match a specific namespace by name using the `projectcalico.org/name` label. Select the non-namespaced resources like GlobalNetworkSet(s), host endpoints to which this policy applies by using `global()` selector. | Valid selector | [selector](#selector) | | +| ports | Positive match on the specified ports | | list of [ports](#ports) | | +| domains | Positive match on [domain names](#exact-and-wildcard-domain-names). | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list of strings | | +| notPorts | Negative match on the specified ports | | list of [ports](#ports) | | +| serviceAccounts | Match endpoints running under service accounts. If a `namespaceSelector` is also defined, the set of service accounts this applies to is limited to the service accounts in the selected namespaces. | | [ServiceAccountMatch](#serviceaccountmatch) | | +| services | Match the specified service(s). If specified on egress rule destinations, no other selection criteria can be set. If specified on ingress rule sources, only positive or negative matches on ports can be specified. | | [ServiceMatch](#servicematch) | | + +> **SECONDARY:** You cannot mix IPv4 and IPv6 CIDRs in a single rule using `nets` or `notNets`. If you need to match both, create 2 rules. + +#### Selector performance in EntityRules[​](#selector-performance-in-entityrules) + +When rendering policy into the data plane, Calico Enterprise must identify the endpoints that match the selectors in all active rules. This calculation is optimized for certain common selector types. Using the optimized selector types reduces CPU usage (and policy rendering time) by orders of magnitude. This becomes important at high scale (hundreds of active rules, hundreds of thousands of endpoints). 
+
+The optimized operators are as follows:
+
+- `label == "value"`
+- `label in { 'v1', 'v2' }`
+- `has(label)`
+- `<expr1> && <expr2>` is optimized if **either** `<expr1>` or `<expr2>` is optimized.
+
+The following perform like `has(label)`. All endpoints with the label will be scanned to find matches:
+
+- `label contains 's'`
+- `label starts with 's'`
+- `label ends with 's'`
+
+The other operators, and in particular, `all()`, `!`, `||` and `!=` are not optimized.
+
+Examples:
+
+- `a == 'b'` - optimized
+- `a == 'b' && has(c)` - optimized
+- `a == 'b' || has(c)` - **not** optimized due to use of `||`
+- `c != 'd'` - **not** optimized due to use of `!=`
+- `!has(a)` - **not** optimized due to use of `!`
+- `a == 'b' && c != 'd'` - optimized, `a == 'b'` is optimized so `a == 'b' && <expr>` is optimized.
+- `c != 'd' && a == 'b'` - optimized, `a == 'b'` is optimized so `<expr> && a == 'b'` is optimized.
+
+### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names)
+
+The `domains` field is only valid for egress Allow rules. It restricts the rule to apply only to traffic to one of the specified domains. If this field is specified, the parent [Rule](#rule)'s `action` must be `Allow`, and `nets` and `selector` must both be left empty.
+
+When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example:
+
+- `microsoft.com`
+- `tigera.io`
+
+With a single asterisk in any part of the domain name, it matches 1 or more domain name components at that position.
For example:
+
+- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com`
+- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com`
+- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on
+
+**Not** supported are:
+
+- Multiple wildcards in the same domain, for example: `*.*.mycompany.com`
+- Asterisks that are not the entire component, for example: `www.g*.com`
+- A wildcard as the last component, for example: `www.mycompany.*`
+- More general wildcards, such as regular expressions
+
+> **SECONDARY:** Calico Enterprise implements policy for domain names by learning the corresponding IPs from DNS, then programming rules to allow those IPs. This means that if multiple domain names A, B and C all map to the same IP, and there is domain-based policy to allow A, traffic to B and C will be allowed as well.
+
+### Selector[​](#selector)
+
+A label selector is an expression which either matches or does not match a resource based on its labels.
+
+Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses.
+
+| Expression | Meaning |
+| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| **Logical operators** | |
+| `( <expr> )` | Matches if and only if `<expr>` matches. (Parentheses are used for grouping expressions.) |
+| `! <expr>` | Matches if and only if `<expr>` does not match. **Tip:** `!` is a special character at the start of a YAML string; if you need to use `!` at the start of a YAML string, enclose the string in quotes. |
+| `<expr1> && <expr2>` | "And": matches if and only if both `<expr1>` and `<expr2>` match |
+| `<expr1> \|\| <expr2>` | "Or": matches if and only if either `<expr1>` or `<expr2>` matches.
|
+| **Match operators** | |
+| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. |
+| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. |
+| `k == 'v'` | Matches resources with the label 'k' and value 'v'. |
+| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` |
+| `has(k)` | Matches resources with label 'k', independent of value. To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` |
+| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set |
+| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set |
+| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' |
+| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' |
+| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' |
+
+Operators have the following precedence:
+
+- **Highest**: all the match operators
+- Parentheses `( ... )`
+- Negation with `!`
+- Conjunction with `&&`
+- **Lowest**: Disjunction with `||`
+
+For example, the expression
+
+```text
+! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'}
+```
+
+Would be "bracketed" like this:
+
+```text
+(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'}))
+```
+
+It would match:
+
+- Any resource that did not have label "my-label".
+
+- Any resource that both:
+
+  - Has a value for `my-label` that starts with "prod", and,
+  - Has a role label with value either "frontend", or "business".
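As the tip on the `!` operator notes, a selector that begins with `!` must be quoted in YAML. A minimal illustrative fragment:

```yaml
# Unquoted, the leading `!` would be parsed as a YAML tag;
# quoting keeps it as part of the selector expression.
selector: "!has(my-label)"
```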
+ +Understanding scopes and the `all()` and `global()` operators: selectors have a scope of resources that they are matched against, which depends on the context in which they are used. For example: + +- The `nodeSelector` in an `IPPool` selects over `Node` resources. + +- The top-level selector in a `NetworkPolicy` selects over the workloads *in the same namespace* as the `NetworkPolicy`. + +- The top-level selector in a `GlobalNetworkPolicy` doesn't have the same restriction, it selects over all endpoints including namespaced `WorkloadEndpoint`s and non-namespaced `HostEndpoint`s. + +- The `namespaceSelector` in a `NetworkPolicy` (or `GlobalNetworkPolicy`) *rule* selects over the labels on namespaces rather than workloads. + +- The `namespaceSelector` determines the scope of the accompanying `selector` in the entity rule. If no `namespaceSelector` is present then the rule's `selector` matches the default scope for that type of policy. (This is the same namespace for `NetworkPolicy` and all endpoints/network sets for `GlobalNetworkPolicy`) + +- The `global()` operator can be used (only) in a `namespaceSelector` to change the scope of the main `selector` to include non-namespaced resources such as [GlobalNetworkSet](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset). This allows namespaced `NetworkPolicy` resources to refer to global non-namespaced resources, which would otherwise be impossible. + +### Ports[​](#ports) + +Calico Enterprise supports the following syntaxes for expressing ports. + +| Syntax | Example | Description | +| --------- | ---------- | ------------------------------------------------------------------- | +| int | 80 | The exact (numeric) port specified | +| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end | +| string | named-port | A named port, as defined in the ports list of one or more endpoints | + +An individual numeric port may be specified as a YAML/JSON integer. 
A port range or named port must be represented as a string. For example, this would be a valid list of ports: + +```yaml +ports: [8080, '1234:5678', 'named-port'] +``` + +#### Named ports[​](#named-ports) + +Using a named port in an `EntityRule`, instead of a numeric port, gives a layer of indirection, allowing for the named port to map to different numeric values for each endpoint. + +For example, suppose you have multiple HTTP servers running as workloads; some exposing their HTTP port on port 80 and others on port 8080. In each workload, you could create a named port called `http-port` that maps to the correct local port. Then, in a rule, you could refer to the name `http-port` instead of writing a different rule for each type of server. + +> **SECONDARY:** Since each named port may refer to many endpoints (and Calico Enterprise has to expand a named port into a set of endpoint/port combinations), using a named port is considerably more expensive in terms of CPU than using a simple numeric port. We recommend that they are used sparingly, only where the extra indirection is required. + +### ServiceAccountMatch[​](#serviceaccountmatch) + +A ServiceAccountMatch matches service accounts in an EntityRule. + +| Field | Description | Schema | +| -------- | ------------------------------- | --------------------- | +| names | Match service accounts by name | list of strings | +| selector | Match service accounts by label | [selector](#selector) | + +### ServiceMatch[​](#servicematch) + +A ServiceMatch matches a service in an EntityRule. + +| Field | Description | Schema | +| --------- | ------------------------ | ------ | +| name | The service's name. | string | +| namespace | The service's namespace. | string | + +### Performance Hints[​](#performance-hints) + +Performance hints provide a way to tell Calico Enterprise about the intended use of the policy so that it may process it more efficiently. 
Currently only one hint is defined: + +- `AssumeNeededOnEveryNode`: normally, Calico Enterprise only calculates a policy's rules and selectors on nodes where the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases. The `AssumeNeededOnEveryNode` hint tells Calico Enterprise to treat the policy as "in use" on *every* node. This is useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads" the policy on every node so that there is less work to do when the first endpoint matching the policy shows up. It also prevents work from being done to tear down the policy when the last endpoint is drained. + +## Application layer policy[​](#application-layer-policy) + +Application layer policy is an optional feature of Calico Enterprise and [must be enabled](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp) to use the following match criteria. + +> **SECONDARY:** Application layer policy match criteria are supported with the following restrictions. +> +> - Only ingress policy is supported. Egress policy must not contain any application layer policy match clauses. +> - Rules must have the action `Allow` if they contain application layer policy match clauses. + +### HTTPMatch[​](#httpmatch) + +An HTTPMatch matches attributes of an HTTP request. The presence of an HTTPMatch clause on a Rule will cause that rule to only match HTTP traffic. Other application layer protocols will not match the rule. + +Example: + +```yaml +http: + + methods: ['GET', 'PUT'] + + paths: + + - exact: '/projects/calico' + + - prefix: '/users' +``` + +| Field | Description | Schema | +| ------- | -------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | +| methods | Match HTTP methods. Case sensitive. 
[Standard HTTP method descriptions.](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html) | list of strings | +| paths | Match HTTP paths. Case sensitive. | list of [HTTPPathMatch](#httppathmatch) | + +### HTTPPathMatch[​](#httppathmatch) + +| Syntax | Example | Description | +| ------ | ------------------- | ------------------------------------------------------------------------------- | +| exact | `exact: "/foo/bar"` | Matches the exact path as written, not including the query string or fragments. | +| prefix | `prefix: "/keys"` | Matches any path that begins with the given prefix. | + +## Supported operations[​](#supported-operations) + +| Datastore type | Create/Delete | Update | Get/List | Notes | +| ------------------------ | ------------- | ------ | -------- | ----- | +| Kubernetes API datastore | Yes | Yes | Yes | | + +#### List filtering on tiers[​](#list-filtering-on-tiers) + +List and watch operations may specify label selectors or field selectors to filter `NetworkPolicy` resources on tiers returned by the API server. When no selector is specified, the API server returns all `NetworkPolicy` resources from all tiers that the user has access to. + +##### Field selector[​](#field-selector) + +When using the field selector, supported operators are `=` and `==` + +The following example shows how to retrieve all `NetworkPolicy` resources in the default tier and in all namespaces: + +```bash +kubectl get networkpolicy.p --field-selector spec.tier=default --all-namespaces +``` + +##### Label selector[​](#label-selector) + +When using the label selector, supported operators are `=`, `==` and `IN`. 
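+
+For example, the equality operator can select the policies in a single tier (the tier name `app-tier` below is a hypothetical placeholder):
+
+```bash
+kubectl get networkpolicy.p -l projectcalico.org/tier=app-tier --all-namespaces
+```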
+ +The following example shows how to retrieve all `NetworkPolicy` resources in the `default` and `net-sec` tiers and in all namespaces: + +```bash +kubectl get networkpolicy.p -l 'projectcalico.org/tier in (default, net-sec)' --all-namespaces +``` + +### Network set + + -##### `l7LogsFileAggregationSourceInfo` +A network set resource (NetworkSet) represents an arbitrary set of IP subnetworks/CIDRs, allowing it to be matched by Calico Enterprise policy. Network sets are useful for applying policy to traffic coming from (or going to) external, non-Calico Enterprise, networks. -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `l7LogsFileAggregationSourceInfo` | -| Description | L7LogsFileAggregationExcludeSourceInfo is used to choose the type of aggregation for the source metadata on L7 log entries. . Accepted values are IncludeL7SourceInfo, IncludeL7SourceInfoNoPort, and ExcludeL7SourceInfo. IncludeL7SourceInfo - Include source metadata in the logs. IncludeL7SourceInfoNoPort - Include source metadata in the logs excluding the source port. ExcludeL7SourceInfo - Aggregate over all other fields ignoring the source aggregated name, namespace, and type. | -| Schema | One of: `ExcludeL7SourceInfo`, `IncludeL7SourceInfo`, `IncludeL7SourceInfoNoPort`. | -| Default | `IncludeL7SourceInfoNoPort` | +`NetworkSet` is a namespaced resource. 
`NetworkSets` in a specific namespace only apply to [network policies](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) in that namespace. Two resources are in the same namespace if they have the same `namespace` value. (See [GlobalNetworkSet](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset) for non-namespaced network sets.)

-##### `l7LogsFileAggregationTrimURL`

+The metadata for each network set includes a set of labels. When Calico Enterprise is calculating the set of IPs that should match a source/destination selector within a [network policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) rule, it includes the CIDRs from any network sets that match the selector.

-| Attribute | Value |
-| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key | `l7LogsFileAggregationTrimURL` |
-| Description | Used to choose the type of aggregation for the url on L7 log entries. . Accepted values: IncludeL7FullURL - Include the full URL up to however many path components are allowed by L7LogsFileAggregationNumURLPath. TrimURLQuery - Aggregate over all other fields ignoring the query parameters on the URL. TrimURLQueryAndPath - Aggregate over all other fields and the base URL only. ExcludeL7URL - Aggregate over all other fields ignoring the URL entirely. |
-| Schema | One of: `ExcludeL7URL`, `IncludeL7FullURL`, `TrimURLQuery`, `TrimURLQueryAndPath`. 
| -| Default | `IncludeL7FullURL` | +> **SECONDARY:** Since Calico Enterprise matches packets based on their source/destination IP addresses, Calico Enterprise rules may not behave as expected if there is NAT between the Calico Enterprise-enabled node and the networks listed in a network set. For example, in Kubernetes, incoming traffic via a service IP is typically SNATed by the kube-proxy before reaching the destination host so Calico Enterprise's workload policy will see the kube-proxy's host's IP as the source instead of the real source. -##### `l7LogsFileAggregationURLCharLimit` +## Sample YAML[​](#sample-yaml) -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `l7LogsFileAggregationURLCharLimit` | -| Description | Limit on the length of the URL collected in L7 logs. When a URL length reaches this limit it is sliced off, and the sliced URL is sent to log storage. | -| Schema | Integer | -| Default | `250` | +```yaml +apiVersion: projectcalico.org/v3 -##### `l7LogsFileDirectory` +kind: NetworkSet -| Attribute | Value | -| ----------- | ------------------------------------------------- | -| Key | `l7LogsFileDirectory` | -| Description | Sets the directory where L7 log files are stored. | -| Schema | String. | -| Default | `/var/log/calico/l7logs` | +metadata: -##### `l7LogsFileEnabled` + name: external-database -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------ | -| Key | `l7LogsFileEnabled` | -| Description | Controls logging L7 logs to a file. If false no L7 logging to file will occur. | -| Schema | Boolean. 
| -| Default | `true` | + namespace: staging -##### `l7LogsFileMaxFileSizeMB` + labels: -| Attribute | Value | -| ----------- | -------------------------------------------------------- | -| Key | `l7LogsFileMaxFileSizeMB` | -| Description | Sets the max size in MB of L7 log files before rotation. | -| Schema | Integer | -| Default | `100` | + role: db -##### `l7LogsFileMaxFiles` +spec: -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `l7LogsFileMaxFiles` | -| Description | Sets the number of L7 log files to keep. | -| Schema | Integer | -| Default | `5` | + nets: -##### `l7LogsFilePerNodeLimit` + - 198.51.100.0/28 -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `l7LogsFilePerNodeLimit` | -| Description | Limit on the number of L7 logs that can be emitted within each flush interval. When this limit has been reached, Felix counts the number of unloggable L7 responses within the flush interval, and emits a WARNING log with that count at the same time as it flushes the buffered L7 logs. A value of 0 means no limit. | -| Schema | Integer | -| Default | `1500` | + - 203.0.113.0/24 -##### `l7LogsFlushInterval` + allowedEgressDomains: -| Attribute | Value | -| ----------- | ------------------------------------------------------- | -| Key | `l7LogsFlushInterval` | -| Description | Configures the interval at which Felix exports L7 logs. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| -| Default | `5m0s` | + - db.com -#### AWS integration[​](#aws-integration) + - '*.db.com' +``` -##### `awsRequestTimeout` +## Network set definition[​](#network-set-definition) -| Attribute | Value | -| ----------- | ---------------------------------------------------- | -| Key | `awsRequestTimeout` | -| Description | The timeout on AWS API requests. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `30s` | +### Metadata[​](#metadata) -##### `awsSecondaryIPRoutingRulePriority` +| Field | Description | Accepted Values | Schema | Default | +| --------- | ------------------------------------------------------------------ | ------------------------------------------------- | ------ | --------- | +| name | The name of this network set. Required. | Lower-case alphanumeric with optional `_` or `-`. | string | | +| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" | +| labels | A set of labels to apply to this endpoint. | | map | | -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------- | -| Key | `awsSecondaryIPRoutingRulePriority` | -| Description | Controls the priority that Felix will use for routing rules when programming them for AWS Secondary IP support. 
| -| Schema | Integer: \[0,4294967295] | -| Default | `101` | +### Spec[​](#spec) -##### `awsSecondaryIPSupport` +| Field | Description | Accepted Values | Schema | Default | +| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------- | ------ | ------- | +| nets | The IP networks/CIDRs to include in the set. | Valid IPv4 or IPv6 CIDRs, for example "192.0.2.128/25" | list | | +| allowedEgressDomains | The list of domain names that belong to this set and are honored in egress allow rules only. Domain names specified here only work to allow egress traffic from the cluster to external destinations. They don't work to *deny* traffic to destinations specified by domain name, or to allow ingress traffic from *sources* specified by domain name. 
| List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list | | -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `awsSecondaryIPSupport` | -| Description | Controls whether Felix will try to provision AWS secondary ENIs for workloads that have IPs from IP pools that are configured with an AWS subnet ID. If the field is set to "EnabledENIPerWorkload" then each workload with an AWS-backed IP will be assigned its own secondary ENI. If set to "Enabled" then each workload with an AWS-backed IP pool will be allocated a secondary IP address on a secondary ENI; this mode requires additional IP pools to be provisioned for the host to claim IPs for the primary IP of the secondary ENIs. Accepted value must be one of "Enabled", "EnabledENIPerWorkload" or "Disabled". | -| Schema | One of: `Disabled`, `Enabled`, `EnabledENIPerWorkload`. | -| Default | `Disabled` | +### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names) -##### `awsSrcDstCheck` +When a configured domain name has no wildcard (`*`), it matches exactly that domain name. 
For example:

-| Attribute | Value |
-| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key | `awsSrcDstCheck` |
-| Description | Controls whether Felix will try to change the "source/dest check" setting on the EC2 instance on which it is running. A value of "Disable" will try to disable the source/dest check. Disabling the check allows for sending workload traffic without encapsulation within the same AWS subnet. |
-| Schema | One of: `"Disable"`, `"DoNothing"`, `"Enable"`. |
-| Default | `DoNothing` |

+- `microsoft.com`
+- `tigera.io`

-#### Egress gateway[​](#egress-gateway)

+With a single asterisk in any part of the domain name, it matches one or more domain name components at that position. For example:

-##### `egressGatewayPollFailureCount`

+- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com`
+- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com`
+- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on

-| Attribute | Value |
-| ----------- | ------------------------------------------------------------------------------------------------ |
-| Key | `egressGatewayPollFailureCount` |
-| Description | The minimum number of poll failures before a remote Egress Gateway is considered to have failed. 
| -| Schema | Integer | -| Default | `3` | +**Not** supported are: -##### `egressGatewayPollInterval` +- Multiple wildcards in the same domain, for example: `*.*.mycompany.com` +- Asterisks that are not the entire component, for example: `www.g*.com` +- A wildcard as the last component, for example: `www.mycompany.*` +- More general wildcards, such as regular expressions -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `egressGatewayPollInterval` | -| Description | The interval at which Felix will poll remote egress gateways to check their health. Only Egress Gateways with a named "health" port will be polled in this way. Egress Gateways that fail the health check will be taken our of use as if they have been deleted. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `10s` | +### Node -##### `egressIPHostIfacePattern` +A node resource (`Node`) represents a node running Calico Enterprise. When adding a host to a Calico Enterprise cluster, a node resource needs to be created which contains the configuration for the `node` instance running on the host. 
-| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `egressIPHostIfacePattern` | -| Description | A comma-separated list of interface names which might send and receive egress traffic across the cluster boundary, after it has left an Egress Gateway pod. Felix will ensure `src_valid_mark` sysctl flags are set correctly for matching interfaces. To target multiple interfaces with a single string, the list supports regular expressions. For regular expressions, wrap the value with `/`. Example: `/^bond/,eth0` will match all interfaces that begin with `bond` and also the interface `eth0`. | -| Schema | String. | -| Default | none | +When starting a `node` instance, the name supplied to the instance should match the name configured in the Node resource. -##### `egressIPRoutingRulePriority` +By default, starting a `node` instance will automatically create a node resource using the `hostname` of the compute host. -| Attribute | Value | -| ----------- | ------------------------------------------------------------------ | -| Key | `egressIPRoutingRulePriority` | -| Description | Controls the priority value to use for the egress IP routing rule. | -| Schema | Integer | -| Default | `100` | +This resource is not supported in `kubectl`. 
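+
+Because `kubectl` does not manage this resource, it can be inspected with `calicoctl` instead. A minimal sketch, assuming `calicoctl` is installed and configured for the cluster (the node name `node-hostname` is a placeholder matching the sample below):
+
+```bash
+# List all node resources known to Calico Enterprise.
+calicoctl get nodes
+
+# Show one node's full configuration, including its BGP settings, as YAML.
+calicoctl get node node-hostname -o yaml
+```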
-##### `egressIPSupport` +## Sample YAML[​](#sample-yaml) -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `egressIPSupport` | -| Description | Defines three different support modes for egress IP function. - Disabled: Egress IP function is disabled. - EnabledPerNamespace: Egress IP function is enabled and can be configured on a per-namespace basis; per-pod egress annotations are ignored. - EnabledPerNamespaceOrPerPod: Egress IP function is enabled and can be configured per-namespace or per-pod, with per-pod egress annotations overriding namespace annotations. | -| Schema | One of: `Disabled`, `EnabledPerNamespace`, `EnabledPerNamespaceOrPerPod`. | -| Default | `Disabled` | +```yaml +apiVersion: projectcalico.org/v3 -##### `egressIPVXLANPort` +kind: Node -| Attribute | Value | -| ----------- | ---------------------------------------------------------- | -| Key | `egressIPVXLANPort` | -| Description | The port number of vxlan tunnel device for egress traffic. | -| Schema | Integer | -| Default | `4790` | +metadata: -##### `egressIPVXLANVNI` + name: node-hostname -| Attribute | Value | -| ----------- | ----------------------------------------------------- | -| Key | `egressIPVXLANVNI` | -| Description | The VNI ID of vxlan tunnel device for egress traffic. 
| -| Schema | Integer | -| Default | `4097` | +spec: -#### External network support[​](#external-network-support) + bgp: -##### `externalNetworkRoutingRulePriority` + asNumber: 64512 -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------- | -| Key | `externalNetworkRoutingRulePriority` | -| Description | Controls the priority value to use for the external network routing rule. | -| Schema | Integer | -| Default | `102` | + ipv4Address: 10.244.0.1/24 -##### `externalNetworkSupport` + ipv6Address: 2001:db8:85a3::8a2e:370:7334/120 -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `externalNetworkSupport` | -| Description | Defines two different support modes for external network function. - Disabled: External network function is disabled. - Enabled: External network function is enabled. | -| Schema | One of: `Disabled`, `Enabled`. | -| Default | `Disabled` | + ipv4IPIPTunnelAddr: 192.168.0.1 +``` -#### Packet capture[​](#packet-capture) +## Definition[​](#definition) -##### `captureDir` +### Metadata[​](#metadata) -| Attribute | Value | -| ----------- | ----------------------------------------- | -| Key | `captureDir` | -| Description | Controls directory to store file capture. | -| Schema | String. | -| Default | `/var/log/calico/pcap` | +| Field | Description | Accepted Values | Schema | +| ----- | -------------------------------- | --------------------------------------------------- | ------ | +| name | The name of this node. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | -##### `captureMaxFiles` +### Spec[​](#spec) -| Attribute | Value | -| ----------- | ------------------------------------------------ | -| Key | `captureMaxFiles` | -| Description | Controls number of rotated capture file to keep. 
| -| Schema | Integer | -| Default | `2` | +| Field | Description | Accepted Values | Schema | Default | +| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | ---------------------------- | ------- | +| bgp | BGP configuration for this node. Omit if using Calico Enterprise for policy only. | | [BGP](#bgp) | | +| ipv4VXLANTunnelAddr | IPv4 address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string | | +| ipv6VXLANTunnelAddr | IPv6 address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string | | +| vxlanTunnelMACAddr | MAC address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string | | +| vxlanTunnelMACAddrV6 | MAC address of the IPv6 VXLAN tunnel. This is system configured and should not be updated manually. | | string | | +| orchRefs | Correlates this node to a node in another orchestrator. | | list of [OrchRefs](#orchref) | | +| wireguard | WireGuard configuration for this node. This is applicable only if WireGuard is enabled in [Felix Configuration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/felixconfig). | | [WireGuard](#wireguard) | | -##### `captureMaxSizeBytes` +### OrchRef[​](#orchref) -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `captureMaxSizeBytes` | -| Description | Controls the max size of a file capture. | -| Schema | Integer | -| Default | `10000000` | +| Field | Description | Accepted Values | Schema | Default | +| ------------ | ------------------------------------------------ | --------------- | ------ | ------- | +| nodeName | Name of this node according to the orchestrator. | | string | | +| orchestrator | Name of the orchestrator. 
| k8s | string | | -##### `captureRotationSeconds` +### BGP[​](#bgp) -| Attribute | Value | -| ----------- | ----------------------------------------------- | -| Key | `captureRotationSeconds` | -| Description | Controls the time rotation of a packet capture. | -| Schema | Integer | -| Default | `3600` | +| Field | Description | Accepted Values | Schema | Default | +| ----------------------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------- | +| asNumber | The AS Number of your `node`. | Optional. If omitted the global value is used (see [example modifying Global BGP settings](https://docs.tigera.io/calico-enterprise/latest/networking/configuring/bgp) for details about modifying the `asNumber` setting). | integer | | +| ipv4Address | The IPv4 address and subnet exported as the next-hop for the Calico Enterprise endpoints on the host | The IPv4 address must be specified if BGP is enabled. | string | | +| ipv6Address | The IPv6 address and subnet exported as the next-hop for the Calico Enterprise endpoints on the host | Optional | string | | +| ipv4IPIPTunnelAddr | IPv4 address of the IP-in-IP tunnel. This is system configured and should not be updated manually. 
| Optional IPv4 address | string | | +| routeReflectorClusterID | Enables this node as a route reflector within the given cluster | Optional IPv4 address | string | | -#### L7 proxy[​](#l7-proxy) +### WireGuard[​](#wireguard) -##### `tproxyMode` +| Field | Description | Accepted Values | Schema | Default | +| -------------------- | ----------------------------------------------------------------------------------------- | --------------- | ------ | ------- | +| interfaceIPv4Address | The IP address and subnet for the IPv4 WireGuard interface created by Felix on this node. | Optional | string | | +| interfaceIPv6Address | The IP address and subnet for the IPv6 WireGuard interface created by Felix on this node. | Optional | string | | -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------ | -| Key | `tproxyMode` | -| Description | Sets whether traffic is directed through a transparent proxy for further processing or not and how is the proxying done. | -| Schema | One of: `Disabled`, `Enabled`, `EnabledAllServices`. | -| Default | `Disabled` | +## Supported operations[​](#supported-operations) -##### `tproxyPort` +| Datastore type | Create/Delete | Update | Get/List | Notes | +| --------------------- | ------------- | ------ | -------- | ----------------------------------------------------- | +| Kubernetes API server | No | Yes | Yes | `node` data is directly tied to the Kubernetes nodes. | -| Attribute | Value | -| ----------- | -------------------------------------------------------- | -| Key | `tproxyPort` | -| Description | Sets to which port proxied traffic should be redirected. 
| -| Schema | Integer | -| Default | `16001` | +### Packet capture -##### `tproxyUpstreamConnMark` + -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------- | -| Key | `tproxyUpstreamConnMark` | -| Description | Tells Felix which mark is used by the proxy for its upstream connections so that Felix can program the dataplane correctly. | -| Schema | Unsigned 32-bit integer. | -| Default | `0x17` | +A Packet Capture resource (`PacketCapture`) represents captured live traffic for debugging microservices and application interaction inside a Kubernetes cluster. -#### Debug/test-only (generally unsupported)[​](#debugtest-only-generally-unsupported) +Calico Enterprise supports selecting one or multiple [WorkloadEndpoints resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) as described in the [Packet Capture](https://docs.tigera.io/calico-enterprise/latest/observability/packetcapture) guide. -##### `debugDisableLogDropping` +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `packetcapture`,`packetcaptures`, `packetcapture.projectcalico.org`, `packetcaptures.projectcalico.org` as well as abbreviations such as `packetcapture.p` and `packetcaptures.p`. -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `debugDisableLogDropping` | -| Description | Disables the dropping of log messages when the log buffer is full. This can significantly impact performance if log write-out is a bottleneck. | -| Schema | Boolean. 
| -| Default | `false` | +## Sample YAML[​](#sample-yaml) -##### `debugHost` +```yaml +apiVersion: projectcalico.org/v3 -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------- | -| Key | `debugHost` | -| Description | The host IP or hostname to bind the debug port to. Only used if DebugPort is set. | -| Schema | String. | -| Default | `localhost` | +kind: PacketCapture -##### `debugMemoryProfilePath` +metadata: -| Attribute | Value | -| ----------- | ----------------------------------------------------------------- | -| Key | `debugMemoryProfilePath` | -| Description | The path to write the memory profile to when triggered by signal. | -| Schema | String. | -| Default | none | + name: sample-capture -##### `debugPort` + namespace: sample-namespace -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `debugPort` | -| Description | If set, enables Felix's debug HTTP port, which allows memory and CPU profiles to be retrieved. The debug port is not secure, it should not be exposed to the internet. | -| Schema | Integer: \[0,65535] | -| Default | none | +spec: -##### `debugSimulateCalcGraphHangAfter` + selector: k8s-app == "sample-app" -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------- | -| Key | `debugSimulateCalcGraphHangAfter` | -| Description | Used to simulate a hang in the calculation graph after the specified duration. This is useful in tests of the watchdog system only! | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. 
| -| Default | `0s` | + filters: -##### `debugSimulateDataplaneApplyDelay` + - protocol: TCP -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `debugSimulateDataplaneApplyDelay` | -| Description | Adds an artificial delay to every dataplane operation. This is useful for simulating a heavily loaded system for test purposes only. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `0s` | + ports: -##### `debugSimulateDataplaneHangAfter` + - 80 +``` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------- | -| Key | `debugSimulateDataplaneHangAfter` | -| Description | Used to simulate a hang in the dataplane after the specified duration. This is useful in tests of the watchdog system only! | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `0s` | +```yaml +apiVersion: projectcalico.org/v3 -##### `statsDumpFilePath` +kind: PacketCapture -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------- | -| Key | `statsDumpFilePath` | -| Description | The path to write a diagnostic flow logs statistics dump to when triggered by signal. | -| Schema | String. | -| Default | `/var/log/calico/stats/dump` | +metadata: -#### Usage reporting[​](#usage-reporting) + name: sample-capture -##### `usageReportingEnabled` + namespace: sample-namespace -| Attribute | Value | -| ----------- | --------------------------------------------------------------------- | -| Key | `usageReportingEnabled` | -| Description | Unused in Calico Enterprise, usage reporting is permanently disabled. | -| Schema | Boolean. 
| -| Default | `true` | +spec: -##### `usageReportingInitialDelay` + selector: all() -| Attribute | Value | -| ----------- | --------------------------------------------------------------------- | -| Key | `usageReportingInitialDelay` | -| Description | Unused in Calico Enterprise, usage reporting is permanently disabled. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `5m0s` | + startTime: '2021-08-26T12:00:00Z' -##### `usageReportingInterval` + endTime: '2021-08-26T12:30:00Z' +``` -| Attribute | Value | -| ----------- | --------------------------------------------------------------------- | -| Key | `usageReportingInterval` | -| Description | Unused in Calico Enterprise, usage reporting is permanently disabled. | -| Schema | Duration string, for example `1m30s123ms` or `1h5m`. | -| Default | `24h0m0s` | +## Packet capture definition[​](#packet-capture-definition) -### Health Timeout Overrides[​](#health-timeout-overrides) +### Metadata[​](#metadata) -Felix has internal liveness and readiness watchdog timers that monitor its various loops. If a loop fails to "check in" within the allotted timeout then Felix will report non-Ready or non-Live on its health port (which is monitored by kubelet in a Kubernetes system). If Felix reports non-Live, this can result in the Pod being restarted. +| Field | Description | Accepted Values | Schema | Default | +| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- | +| name | The name of the packet capture. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | | +| namespace | Namespace provides an additional qualification to a resource name. 
| | string | "default" | -In Kubernetes, if you see the calico-node Pod readiness or liveness checks fail intermittently, check the calico-node Pod log for a log from Felix that gives the overall health status (the list of components will depend on which features are enabled): +### Spec[​](#spec) -```text -+---------------------------+---------+----------------+-----------------+--------+ +| Field | Description | Accepted Values | Schema | Default | +| --------- | ---------------------------------------------------------------------------------- | ----------------------- | ----------------------- | ------- | +| selector | Selects the endpoints to which this packet capture applies. | | [selector](#selector) | | +| filters | The ordered set of filters applied to traffic captured from an interface. | | [filters](#filters) | | +| startTime | Defines the start time from which this PacketCapture will start capturing packets. | Date in RFC 3339 format | [startTime](#starttime) | | +| endTime | Defines the end time at which this PacketCapture will stop capturing packets. | Date in RFC 3339 format | [endTime](#endtime) | | -| COMPONENT | TIMEOUT | LIVENESS | READINESS | DETAIL | +### Selector[​](#selector) -+---------------------------+---------+----------------+-----------------+--------+ +A label selector is an expression which either matches or does not match a resource based on its labels. -| CalculationGraph | 30s | reporting live | reporting ready | | +Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses. 
-| FelixStartup | 0s | reporting live | reporting ready | |

+| Expression | Meaning |
+| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| **Logical operators** | |
+| `(<expression>)` | Matches if and only if `<expression>` matches. (Parentheses are used for grouping expressions.) |
+| `!<expression>` | Matches if and only if `<expression>` does not match. **Tip:** `!` is a special character at the start of a YAML string; if you need to use `!` at the start of a YAML string, enclose the string in quotes. |
+| `<expression 1> && <expression 2>` | "And": matches if and only if both `<expression 1>` and `<expression 2>` match. |
+| `<expression 1> \|\| <expression 2>` | "Or": matches if and only if either `<expression 1>` or `<expression 2>` matches. |
+| **Match operators** | |
+| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. |
+| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. |
+| `k == 'v'` | Matches resources with the label 'k' and value 'v'. |
+| `k != 'v'` | Matches resources without label 'k', or with label 'k' and value *not* equal to `v`. |
+| `has(k)` | Matches resources with label 'k', independent of value. 
To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` |
+| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set |
+| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set |
+| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' |
+| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' |
+| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' |

-| InternalDataplaneMainLoop | 1m30s | reporting live | reporting ready | |

+Operators have the following precedence:

-+---------------------------+---------+----------------+-----------------+--------+

+- **Highest**: all the match operators
+- Parentheses `( ... )`
+- Negation with `!`
+- Conjunction with `&&`
+- **Lowest**: Disjunction with `||`
+
+For example, the expression
+
+```text
+! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'}
```

-If some health timeouts show as "timed out" it may help to apply an override using the `healthTimeoutOverrides` field:

+Would be "bracketed" like this:

-```yaml
-...

+```text
+(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'}))
+```

-spec:

+It would match:

- healthTimeoutOverrides:

+- Any resource that did not have label "my-label".

- - name: InternalDataplaneMainLoop

+- Any resource that both:

- timeout: "5m"

+ 

- - name: CalculationGraph

+ - Has a value for `my-label` that starts with "prod", and,
+ - Has a role label with value either "frontend", or "business".

- timeout: "1m30s"

+### Filters[​](#filters)

- ...
-```

+| Field | Description | Accepted Values | Schema | Default |
+| -------- | ------------------------------------- | ------------------------------------------------------------ | ----------------- | ------- |
+| protocol | Positive protocol match. 
| `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | +| ports | Positive match on the specified ports | | list of ports | | -A timeout value of 0 disables the timeout. +Calico Enterprise supports the following syntax for expressing ports. -### ProtoPort[​](#protoport) +| Syntax | Example | Description | +| --------- | --------- | ---------------------------------------------------- | +| int | 80 | The exact (numeric) port specified | +| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end | -| Field | Description | Accepted Values | Schema | -| -------- | -------------------- | ------------------------------------ | ------ | -| port | The exact port match | 0-65535 | int | -| protocol | The protocol match | tcp, udp, sctp | string | -| net | The CIDR match | any valid CIDR (e.g. 192.168.0.0/16) | string | +An individual numeric port may be specified as a YAML/JSON integer. A port range must be represented as a string. Named ports are not supported by `PacketCapture`. Multiple ports can be defined to filter traffic. All specified ports or port ranges concatenated using the logical operator "OR". -Keep in mind that in the following example, `net: ""` and `net: "0.0.0.0/0"` are processed as the same in the policy enforcement. +For example, this would be a valid list of ports: ```yaml - ... - -spec: +ports: [8080, '1234:5678'] +``` - failsafeInboundHostPorts: +Multiple filter rules can be defined to filter traffic. All rules are concatenated using the logical operator "OR". For example, filtering both TCP or UDP traffic will be defined as: - - net: "192.168.1.1/32" +```yaml +filters: - port: 22 + - protocol: TCP - protocol: tcp + - protocol: UDP +``` - - net: "" +Within a single filter rule, protocol and list of valid ports will be concatenated using the logical operator "AND". 
- port: 67 +For example, filtering TCP traffic and traffic for port 80 will be defined as: - protocol: udp +```yaml +filters: -failsafeOutboundHostPorts: + - protocol: TCP - - net: "0.0.0.0/0" + ports: [80] +``` - port: 67 +### StartTime[​](#starttime) - protocol: udp +Defines the start time from which this PacketCapture will start capturing packets in RFC 3339 format. If omitted or the value is in the past, the capture will start immediately. If the value is changed to a future time, capture will stop immediately and restart at that time. - ... +```yaml +startTime: '2021-08-26T12:00:00Z' ``` -### AggregationKind[​](#aggregationkind) - -| Value | Description | -| ----- | ---------------------------------------------------------------------------------------- | -| 0 | No aggregation | -| 1 | Aggregate all flows that share a source port on each node | -| 2 | Aggregate all flows that share source ports or are from the same ReplicaSet on each node | - -### DNSPolicyMode[​](#dnspolicymode) +### EndTime[​](#endtime) -| Value | Description | -| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| DelayDeniedPacket (default) | Felix delays any denied packet that traversed a policy that included egress domain matches, but did not match. The packet is released after a fixed time, or after the destination IP address was programmed. | -| DelayDNSResponse | Felix delays any DNS response until related IPSets are programmed. 
This introduces some latency to all DNS packets (even when no IPSet programming is required), but it ensures policy hit statistics are accurate. This is the recommended setting when you are making use of staged policies or policy rule hit statistics. A Linux kernel version of 3.13 or greater is required to use `DelayDNSResponse`. For earlier kernel versions, this value is modified to `DelayDeniedPacket`. |
-| Inline | Parses DNS response inline with DNS response packet processing within iptables. This guarantees the DNS rules reflect any change immediately. This mode works for iptables only and matches the same mode for `BPFDNSPolicyMode`. This setting is ignored on Windows and `NoDelay` is always used. |
-| NoDelay | Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. |

+Defines the end time at which this PacketCapture will stop capturing packets, in RFC 3339 format. If omitted, the capture will continue indefinitely. If the value is changed to a time in the past, capture will stop immediately.

-On Windows, or when using the eBPF data plane, this setting is ignored. Windows always uses `NoDelay` while eBPF has its own [BPFDNSPolicyMode](#bpfdnspolicymode) option. 
+```yaml
+endTime: '2021-08-26T12:30:00Z'
+```

-### BPFDNSPolicyMode[​](#bpfdnspolicymode)

+### Status[​](#status)

-| Value | Description |
-| ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Inline | Felix does not introduce any delay to any packets. Felix's eBPF programs parse DNS responses and program policy rules immediately, before the DNS response is passed to the application. This only applies to wildcard prefixes: `*.x.y.z` will be processed in this manner, but wildcards such as `x.*.y.z` will not match. A Linux kernel version of 5.17 or greater is required. |
-| NoDelay | Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. |

+`PacketCaptureStatus` lists the current state of a `PacketCapture` and its generated capture files.

-### RouteTableRange[​](#routetablerange)

+| Field | Description |
+| ----- | -------------------------------------------------------------------------------------------------------------------------------- |
+| files | Describes the location of the generated packet capture files: the node they reside on, their directory, and the generated file names. |

-The `RouteTableRange` option is now deprecated in favor of [RouteTableRanges](#routetableranges). 
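+For illustration only, a populated `PacketCapture` status stanza might be shaped as follows — the node name, directory, and file name here are hypothetical values, not output from a real capture:

+```yaml
+status:
+  files:
+    # One entry per node that captured traffic for this PacketCapture.
+    - node: node-0
+      directory: /var/log/calico/pcap
+      fileNames:
+        - sample-capture-eth0.pcap
+      state: Capturing
+```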
+### Files[​](#files)

-| Field | Description | Accepted Values | Schema |
-| ----- | -------------------- | --------------- | ------ |
-| min | Minimum index to use | 1-250 | int |
-| max | Maximum index to use | 1-250 | int |

+| Field | Description |
+| --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| directory | The path inside the calico-node container for the generated files. |
+| fileNames | The names of the generated files for a `PacketCapture`, ordered alphanumerically. The active packet capture file will be identified using the following schema: `{}.pcap`. Rotated capture file names will contain an index matching the rotation timestamp. |
+| node | The hostname of the Kubernetes node the files are located on. |
+| state | Determines whether a PacketCapture is capturing traffic from any interface attached to the current node. Possible values include: Capturing, Scheduled, Finished, Error, WaitingForTraffic |

-### RouteTableRanges[​](#routetableranges)

+### Policy recommendation scope

+ 

-`RouteTableRanges` is a list of `RouteTableRange` objects:

+ 

-| Field | Description | Accepted Values | Schema |

+ 

-| ----- | -------------------- | --------------- | ------ |

+ 

-| min | Minimum index to use | 1 - 4294967295 | int |

+ 

-| max | Maximum index to use | 1 - 4294967295 | int |

+ 

-Each item in the `RouteTableRanges` list designates a range of routing tables available to Calico. By default, Calico will use a single range of `1-250`. If a range spans Linux's reserved table range (`253-255`) then those tables are automatically excluded from the list. It's possible that other table ranges may also be reserved by third-party systems unknown to Calico. 
In that case, multiple ranges can be defined to target tables below and above the sensitive ranges: + -```sh - target tables 65-99, and 256-1000, skipping 100-255 + -calicoctl patch felixconfig default --type=merge -p '{"spec":{"routeTableRanges": [{"min": 65, "max": 99}, {"min": 256, "max": 1000}] }} -``` + -*Note*, for performance reasons, the maximum total number of routing tables that Felix will accept is 65535 (or 2\*16). + -Specifying both the `RouteTableRange` and `RouteTableRanges` arguments is not supported and will result in an error from the api. + -### AWS IAM Role/Policy for source-destination-check configuration[​](#aws-iam-rolepolicy-for-source-destination-check-configuration) +The policy recommendation scope is a collection of configuration options to control [policy recommendation](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations) in the web console. -Setting `awsSrcDstCheck` to `Disable` will automatically disable source-destination-check on EC2 instances in a cluster, provided necessary IAM roles and policies are set. One of the policies assigned to IAM role of cluster nodes must contain a statement similar to the following: +To apply changes to this resource, use the following format: ```text -{ - - "Effect": "Allow", - - "Action": [ +$ kubectl patch policyrecommendationscope default -p '{"spec":{"":""}}' +``` - "ec2:DescribeInstances", +**Example** - "ec2:ModifyNetworkInterfaceAttribute" +`$ kubectl patch policyrecommendationscope default -p '{"spec":{"interval":"5m"}}'` - ], +## Definition[​](#definition) - "Resource": "*" +### -} -``` +### Metadata[​](#metadata) -If there are no policies attached to node roles containing the above statement, attach a new policy. For example, if a node role is `test-cluster-nodeinstance-role`, click on the IAM role in AWS console. In the `Permission policies` list, add a new inline policy with the above statement to the new policy JSON definition. 
For detailed information, see [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html?icmpid=docs_iam_console). +| Field | Description | Accepted Values | Schema | Default | +| ----- | -------------------------------------------- | --------------- | ------ | ------- | +| name | The name of the policy recommendation scope. | `default` | string | | -For an EKS cluster, the necessary IAM role and policy is available by default. No further actions are needed. +### Spec[​](#spec) -## Supported operations[​](#supported-operations) +| Field | Description | Accepted Values | Schema | Default | +| ------------------- | ------------------------------------------------------------------------------------------------ | --------------- | ------ | -------------- | +| Interval | The frequency to create and refine policy recommendations. | | | 2.5m (minutes) | +| InitialLookback | Start time to look at flow logs when first creating a policy recommendation. | | | 24h (hours) | +| StabilizationPeriod | Time that a recommended policy should remain unchanged so it is stable and ready to be enforced. | | | 10m (minutes) | -| Datastore type | Create | Delete | Delete (Global `default`) | Update | Get/List | Notes | -| --------------------- | ------ | ------ | ------------------------- | ------ | -------- | ----- | -| Kubernetes API server | Yes | Yes | No | Yes | Yes | | +#### NamespaceSpec[​](#namespacespec) -### Global Alert +| Field | Description | Accepted Values | Schema | Default | +| -------------------------------- | ------------------------------------------------------ | ---------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | +| recStatus | Defines the policy recommendation engine status. | Enabled/Disabled | | Disabled | +| selector | Selects the namespaces for generating recommendations. 
| | | `!(projectcalico.org/name starts with ''tigera-'') && !(projectcalico.org/name starts with ''calico-'') && !(projectcalico.org/name starts with ''kube-'')` | +| intraNamespacePassThroughTraffic | When true, sets all intra-namespace traffic to Pass | true/false | | false | -A global alert resource represents a query that is periodically run against data sets collected by Calico Enterprise whose findings are added to the Alerts page in the Calico Enterprise web console. Alerts may search for the existence of rows in a query, or when aggregated metrics satisfy a condition. +### Profile -Calico Enterprise supports alerts on the following data sets: +A profile resource (`Profile`) represents a set of rules which are applied to the individual endpoints to which this profile has been assigned. -- [Audit logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/audit-overview) -- [DNS logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/dns/) -- [Flow logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/flow/) -- [L7 logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/l7/) +Each Calico Enterprise endpoint or host endpoint can be assigned to zero or more profiles. -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases can be used to specify the resource type on the CLI: `globalalert.projectcalico.org`, `globalalerts.projectcalico.org` and abbreviations such as `globalalert.p` and `globalalerts.p`. +This resource is not supported in `kubectl`. ## Sample YAML[​](#sample-yaml) +The following sample profile applies the label `stage: development` to any endpoint that includes `dev-apps` in its list of profiles. 
+ ```yaml apiVersion: projectcalico.org/v3 -kind: GlobalAlert +kind: Profile metadata: - name: sample + name: dev-apps spec: - summary: 'Sample' + labelsToApply: - description: 'Sample ${source_namespace}/${source_name_aggr}' + stage: development +``` - severity: 100 +## Definition[​](#definition) - dataSet: flows +### Metadata[​](#metadata) - query: action=allow +| Field | Description | Accepted Values | Schema | Default | +| ------ | ---------------------------------- | --------------------------------------------------- | ---------------------------------- | ------- | +| name | The name of the profile. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | | +| labels | A set of labels for this profile. | | map of string key to string values | | - aggregateBy: [source_namespace, source_name_aggr] +### Spec[​](#spec) - field: num_flows +| Field | Description | Accepted Values | Schema | Default | +| -------------------- | -------------------------------------------------------------------------------------------------------------- | --------------- | ------------------------------------------------------------------------------------------------------ | ------- | +| ingress (deprecated) | The ingress rules belonging to this profile. | | List of [Rule](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#rule) | | +| egress (deprecated) | The egress rules belonging to this profile. | | List of [Rule](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#rule) | | +| labelsToApply | An optional set of labels to apply to each endpoint in this profile (in addition to the endpoint's own labels) | | map | | - metric: sum +For `Rule` details please see the [NetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) or [GlobalNetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) resource. 
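+As a sketch of how `labelsToApply` interacts with policy: endpoints assigned the `dev-apps` profile above carry the `stage: development` label, so a policy selector can match them. The policy below is a hypothetical example, not a Calico Enterprise default:

+```yaml
+apiVersion: projectcalico.org/v3
+kind: GlobalNetworkPolicy
+metadata:
+  name: allow-within-dev-stage
+spec:
+  # Applies to every endpoint labelled via the dev-apps profile.
+  selector: stage == 'development'
+  types:
+    - Ingress
+  ingress:
+    # Allow traffic between endpoints that share the development stage label.
+    - action: Allow
+      source:
+        selector: stage == 'development'
+```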
- condition: gt +## Supported operations[​](#supported-operations) - threshold: 0 -``` +| Datastore type | Create/Delete | Update | Get/List | Notes | +| --------------------- | ------------- | ------ | -------- | ----------------------------------------------------------------------------------- | +| Kubernetes API server | No | No | Yes | Calico Enterprise profiles are pre-assigned for each Namespace and Service Account. | -## GlobalAlert definition[​](#globalalert-definition) +### Remote cluster configuration -### Metadata[​](#metadata) +A remote cluster configuration resource (RemoteClusterConfiguration) represents a cluster in a federation of clusters. Each remote cluster needs a configuration to be specified to allow the local cluster to access resources on the remote cluster. The connection is one-way: the information flows only from the remote to the local cluster. To share information from the local cluster to the remote one a remote cluster configuration resource must be created on the remote cluster. -| Field | Description | Accepted Values | Schema | -| ----- | ----------------------- | ----------------------------------------- | ------ | -| name | The name of this alert. 
| Lower-case alphanumeric with optional `-` | string | +A remote cluster configuration causes Typha and `calicoq` to retrieve the following resources from a remote cluster: -### Spec[​](#spec) +- [Workload endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) +- [Host endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint) +- [Profiles](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) (rules are not retrieved from remote profiles, only the `LabelsToApply` field is used) -| Field | Description | Type | Required | Acceptable Values | Default | -| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | ----------------------------------------- | ----------------------------- | -------------------------------- | -| type | Type will dictate how the fields of the GlobalAlert will be utilized. Each `type` will have different usages and/or defaults for the other GlobalAlert fields as described in the table. | string | no | RuleBased | RuleBased | -| description | Human-readable description of the template. | string | yes | | | -| summary | Template for the description field in generated events. See the summary section below for more details. `description` is used if this is omitted. | string | no | | | -| severity | Severity of the alert for display in Manager. | int | yes | 1 - 100 | | -| dataSet | Which data set to execute the alert against. | string | if `type` is `RuleBased` | audit, dns, flows, l7 | | -| period | How often the query defined will run, if `type` is `RuleBased`. | duration | no | 1h 2m 3s | 5m, 15m if `type` is `RuleBased` | -| lookback | Specifies how far back in time data is to be collected. 
Must exceed audit log flush interval, `dnsLogsFlushInterval`, or `flowLogsFlushInterval` as appropriate. | duration | no | 1h 2m 3s | 10m | -| query | Which data to include from the source data set. Written in a domain-specific query language. See the query section below. | string | no | | | -| aggregateBy | An optional list of fields to aggregate results. | string array | no | | | -| field | Which field to aggregate results by if using a metric other than count. | string | if metric is one of avg, max, min, or sum | | | -| metric | A metric to apply to aggregated results. `count` is the number of log entries matching the aggregation pattern. Others are applied only to numeric fields in the logs. | string | no | avg, max, min, sum, count | | -| condition | Compare the value of the metric to the threshold using this condition. | string | if metric defined | eq, not\_eq, lt, lte, gt, gte | | -| threshold | A numeric value to compare the value of the metric against. | float | if metric defined | | | -| substitutions | An optional list of values to replace variable names in query. | List of [GlobalAlertSubstitution](#globalalertsubstitution) | no | | | +When using the Kubernetes API datastore with RBAC enabled on the remote cluster, the RBAC rules must be configured to allow access to these resources. -### GlobalAlertSubstitution[​](#globalalertsubstitution) +For more details on the federation feature refer to the [Overview](https://docs.tigera.io/calico-enterprise/latest/multicluster/federation/overview). -| Field | Description | Type | Required | -| ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ | -------- | -| name | The name of the global alert substitution. It will be referenced by the variable names in query. Duplicate names are not allowed in the substitutions list. 
| string | yes | -| values | A list of values for this substitution. Wildcard operators asterisk (`*`) and question mark (`?`) are supported. | string array | yes | +For the meaning of the fields matches the configuration used for configuring `calicoctl`, see [Kubernetes datastore](https://docs.tigera.io/calico-enterprise/latest/operations/clis/calicoctl/configure/datastore) instructions for more details. -### Status[​](#status) +This resource is not supported in `kubectl`. -| Field | Description | -| --------------- | ------------------------------------------------------------------------------------------- | -| lastUpdate | When the alert was last modified on the backend. | -| active | Whether the alert is active on the backend. | -| healthy | Whether the alert is in an error state or not. | -| lastExecuted | When the query for the alert last ran. | -| lastEvent | When the condition of the alert was last satisfied and an alert was successfully generated. | -| errorConditions | List of errors preventing operation of the updates or search. | +## Sample YAML[​](#sample-yaml) -## Query[​](#query) +For a remote Kubernetes datastore cluster: -Alerts use a domain-specific query language to select which records from the data set should be used in the alert. This could be used to identify flows with specific features, or to select (or omit) certain namespaces from consideration. +```yaml +apiVersion: projectcalico.org/v3 -The query language is composed of any number of selectors, combined with boolean expressions (`AND`, `OR`, and `NOT`), set expressions (`IN` and `NOTIN`) and bracketed subexpressions. These are translated by Calico Enterprise to Elastic DSL queries that are executed on the backend. +kind: RemoteClusterConfiguration -Set expressions support wildcard operators asterisk (`*`) and question mark (`?`). The asterisk sign matches zero or more characters and the question mark matches a single character. 
Set values can be embedded into the query string or reference the values in the global alert substitution list. +metadata: -A selector consists of a key, comparator, and value. Keys and values may be identifiers consisting of alphanumerics and underscores (`_`) with the first character being alphabetic or an underscore, or may be quoted strings. Values may also be integer or floating point numbers. Comparators may be `=` (equal), `!=` (not equal), `<` (less than), `<=` (less than or equal), `>` (greater than), or `>=` (greater than or equal). + name: cluster1 -Keys must be indexed fields in their corresponding data set. See the appendix for a list of valid keys in each data set. +spec: -Examples: + datastoreType: kubernetes -- `query: "count > 0"` -- `query: "\"servers.ip\" = \"127.0.0.1\""` + kubeconfig: /etc/tigera-federation-remotecluster/kubeconfig-rem-cluster-1 +``` -Selectors may be combined using `AND`, `OR`, and `NOT` boolean expressions, `IN` and `NOTIN` set expressions, and bracketed subexpressions. +For a remote etcdv3 cluster: -Examples: +```yaml +apiVersion: projectcalico.org/v3 -- `query: "count > 100 AND client_name=mypod"` -- `query: "client_namespace = ns1 OR client_namespace = ns2"` -- `query: "count > 100 AND NOT (client_namespace = ns1 OR client_namespace = ns2)"` -- `query: "(qtype = A OR qtype = AAAA) AND rcode != NoError"` -- `query: "process_name IN {\"proc1?\", \"*proc2\"} AND source_namespace = ns1` -- `query: "qname NOTIN ${domains}"` +kind: RemoteClusterConfiguration -## Aggregation[​](#aggregation) +metadata: -Results from the query can be aggregated by any number of data fields. Only these data fields will be included in the generated alerts, and each unique combination of aggregations will generate a unique alert. Careful consideration of fields for aggregation will yield the best results. 
+ name: cluster1 -Some good choices for aggregations on the `flows` data set are `[source_namespace, source_name_aggr, source_name]`, `[source_ip]`, `[dest_namespace, dest_name_aggr, dest_name]`, and `[dest_ip]` depending on your use case. For the `dns` data set, `[client_namespace, client_name_aggr, client_name]` is a good choice for an aggregation pattern. +spec: -## Metrics and conditions[​](#metrics-and-conditions) + datastoreType: etcdv3 -Results from the query can be further aggregated using a metric that is applied to a numeric field, or counts the number of rows in an aggregation. Search hits satisfying the condition are output as alerts. + etcdEndpoints: 'https://10.0.0.1:2379,https://10.0.0.2:2379' +``` -| Metric | Description | Applied to Field | -| ------ | ---------------------------------- | ---------------- | -| count | Counts the number of rows | No | -| min | The minimal value of the field | Yes | -| max | The maximal value of the field | Yes | -| sum | The sum of all values of the field | Yes | -| avg | The average value of the field | Yes | +## RemoteClusterConfiguration Definition[​](#remoteclusterconfiguration-definition) -| Condition | Description | -| --------- | --------------------- | -| eq | Equals | -| not\_eq | Not equals | -| lt | Less than | -| lte | Less than or equal | -| gt | Greater than | -| gte | Greater than or equal | +### Metadata[​](#metadata) -Example: +| Field | Description | Accepted Values | Schema | +| ----- | ---------------------------------------------- | ----------------------------------------- | ------ | +| name | The name of this remote cluster configuration. 
| Lower-case alphanumeric with optional `-` | string |

-```yaml
-apiVersion: projectcalico.org/v3

+### Spec[​](#spec)

-kind: GlobalAlert

+| Field | Secret key | Description | Accepted Values | Schema | Default |
+| ------------------- | ------------- | ------------------------------------------------------------------ | --------------------- | -------------------------- | ------- |
+| clusterAccessSecret | | Reference to a Secret that contains connection information | | Kubernetes ObjectReference | none |
+| datastoreType | datastoreType | The datastore type of the remote cluster. | `etcdv3` `kubernetes` | string | none |
+| etcdEndpoints | etcdEndpoints | A comma-separated list of etcd endpoints. | | string | none |
+| etcdUsername | etcdUsername | Username for RBAC. | | string | none |
+| etcdPassword | etcdPassword | Password for the given username. | | string | none |
+| etcdKeyFile | etcdKey | Path to the etcd key file. | | string | none |
+| etcdCertFile | etcdCert | Path to the etcd certificate file. | | string | none |
+| etcdCACertFile | etcdCACert | Path to the etcd CA certificate file. | | string | none |
+| kubeconfig | kubeconfig | Location of the `kubeconfig` file. | | string | none |
+| k8sAPIEndpoint | | Location of the Kubernetes API server. | | string | none |
+| k8sKeyFile | | Location of a client key for accessing the Kubernetes API. | | string | none |
+| k8sCertFile | | Location of a client certificate for accessing the Kubernetes API. | | string | none |
+| k8sCAFile | | Location of a CA certificate. | | string | none |
+| k8sAPIToken | | Token to be used for accessing the Kubernetes API. | | string | none |

-metadata:

+When using the `clusterAccessSecret` field, all other fields in the RemoteClusterConfiguration resource must be empty. When the `clusterAccessSecret` reference is used, all datastore configuration will be read from the referenced Secret, using the "Secret key" fields named in the above table as the data keys in the Secret.
Fields that hold file paths or locations in a RemoteClusterConfiguration are expected to hold the file contents themselves when read from a Secret.

- name: frequent-dns-responses

+All of the fields that start with `etcd` are only valid when the datastore type is etcdv3, and the fields that start with `k8s` or `kube` are only valid when the datastore type is kubernetes. The `kubeconfig` field and the fields that end with `File` must be accessible to Typha and `calicoq`; this does not apply when the data comes from a Secret referenced by `clusterAccessSecret`.

-spec:

+When the datastore type is `kubernetes`, the `kubeconfig` file is optional, but because it can contain all of the authentication information needed to access the Kubernetes API server, it is generally easier to use than setting all the individual `k8s` fields. The other `k8s` fields can still be used on their own, or to override specific kubeconfig values.

- description: 'Monitor for NXDomain'

+## Supported operations[​](#supported-operations)

- summary: 'Observed ${sum} NXDomain responses for ${qname}'

+| Datastore type | Create/Delete | Update | Get/List | Notes |
+| --------------------- | ------------- | ------ | -------- | ----- |
+| etcdv3 | Yes | Yes | Yes | |
+| Kubernetes API server | Yes | Yes | Yes | |

- severity: 100

+### Security event webhook

- dataSet: dns

+A security event webhook (`SecurityEventWebhook`) is a cluster-scoped resource that represents an instance of an integration with an external system through the webhook callback mechanism.

- query: rcode = NXDomain AND (rtype = A or rtype = AAAA)

+For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases can be used to specify the resource type on the CLI: `securityeventwebhook.projectcalico.org`, `securityeventwebhooks.projectcalico.org` and abbreviations such as `securityeventwebhook.p` and `securityeventwebhooks.p`.
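The `clusterAccessSecret` pattern described above can be sketched as a pair of resources. This is a sketch only: the Secret name and namespace are assumptions, not prescribed values, and the data keys follow the "Secret key" column of the Spec table.

```yaml
# Sketch, assuming a hypothetical Secret name and namespace.
apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster-1     # hypothetical name
  namespace: calico-system   # assumption: a namespace the components can read from
stringData:
  datastoreType: kubernetes
  # When read from a Secret, "kubeconfig" holds the file contents, not a path.
  kubeconfig: |
    # ...kubeconfig file contents for the remote cluster...
---
apiVersion: projectcalico.org/v3
kind: RemoteClusterConfiguration
metadata:
  name: cluster1
spec:
  # All other spec fields must be empty when clusterAccessSecret is set.
  clusterAccessSecret:
    kind: Secret
    name: remote-cluster-1
    namespace: calico-system
```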
- aggregateBy: qname +## Sample YAML[​](#sample-yaml) - field: count +```yaml +apiVersion: projectcalico.org/v3 - metric: sum +kind: SecurityEventWebhook - condition: gte +metadata: - threshold: 100 -``` + name: jira-webhook -This alert identifies non-existing DNS responses for Internet addresses that were observed more than 100 times in the past 10 minutes. + annotations: -### Unconditional alerts[​](#unconditional-alerts) + webhooks.projectcalico.org/labels: 'Cluster name:Calico Enterprise' -If the `field`, `metric`, `condition`, and `threshold` fields of an alert are left blank then the alert will trigger whenever its query returns any data. Each hit (or aggregation pattern, if `aggregateBy` is non-empty) returned will cause an event to be created. This should be used **only** when the query is highly specific to avoid filling the Alerts page and index with a large number of events. The use of `aggregateBy` is strongly recommended to reduce the number of entries added to the Alerts page. +spec: -The following example would alert on incoming connections to postgres pods from the Internet that were not denied by policy. It runs hourly to reduce the noise. Noise could be further reduced by removing `source_ip` from the `aggregateBy` clause at the cost of removing `source_ip` from the generated events. + consumer: Jira -```yaml -period: 1h + state: Enabled -lookback: 75m + query: type=waf -query: 'dest_labels="application=postgres" AND source_type=net AND action=allow AND proto=tcp AND dest_port=5432' + config: -aggregateBy: [dest_namespace, dest_name, source_ip] -``` + - name: url -## Summary template[​](#summary-template) + value: 'https://your-jira-instance-name.atlassian.net/rest/api/2/issue/' -Alerts may include a summary template to provide context for the alerts in the Calico Enterprise web console Alert user interface. Any field in the `aggregateBy` section, or the value of the `metric` may be substituted in the summary using a bracketed variable syntax. 
+ - name: project -Example: + value: PRJ -```yaml -summary: 'Observed ${sum} NXDomain responses for ${qname}' -``` + - name: issueType -The `description` field is validated in the same manner. If not provided, the `description` field is used in place of the `summary` field. + value: Bug -## Period and lookback[​](#period-and-lookback) + - name: username -The interval between alerts, and the amount of data considered by the alert may be controlled using the `period` and `lookback` parameters respectively. These fields are formatted as [duration](https://golang.org/pkg/time/#ParseDuration) strings. + valueFrom: -> A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". + secretKeyRef: -The minimum duration of a period is 1 minute with a default of 5 minutes and the default for lookback is 10 minutes. The lookback should always be greater than the sum of the period and the configured `FlowLogsFlushInterval` or `DNSLogsFlushInterval` as appropriate to avoid gaps in coverage. + name: jira-secrets -## Alert records[​](#alert-records) + key: username -With only aggregations and no metrics, the alert will generate one event per aggregation pattern returned by the query. The record field will contain only the aggregated fields. As before, this should be used with specific queries. + - name: apiToken -The addition of a metric will include the value of that metric in the record, along with any aggregations. This, combined with queries as necessary, will yield the best results in most cases. + valueFrom: -With no aggregations the alert will generate one event per record returned by the query. The record will be included in its entirety in the record field of the event. This should only be used with very narrow and specific queries. 
+ secretKeyRef: -## Templates[​](#templates) + name: jira-secrets -Calico Enterprise supports the `GlobalAlertTemplate` resource type. These are used in the Calico Enterprise web console to create alerts with prepopulated fields that can be modified to suit your needs. The `GlobalAlertTemplate` resource is configured identically to the `GlobalAlert` resource. Calico Enterprise includes some sample Alert templates; add your own templates as needed. + key: token +``` -### Sample YAML[​](#sample-yaml-1) +## Security event webhook definition[​](#security-event-webhook-definition) -**RuleBased GlobalAlert** +### Metadata[​](#metadata) -```yaml -apiVersion: projectcalico.org/v3 +| Field | Description | Accepted Values | Schema | +| ----- | --------------------------------------------------------- | --------------------------------------------------- | ------ | +| name | Unique name to describe this resource instance. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | -kind: GlobalAlertTemplate +#### Annotations[​](#annotations) -metadata: +Security event webhooks provide an easy way to add arbitrary data to the webhook generated HTTP payload through the metadata annotation. The value of the `webhooks.projectcalico.org/labels`, if present, will be converted into the payload labels. The value must conform to the following rules: - name: http.connections +- Key and value data for a single label are separated by the `:` character, +- Multiple labels are separated by the `,` character. -spec: +### Spec[​](#spec) - description: 'HTTP connections to a target namespace' +| Field | Description | Accepted Values | Schema | Required | +| -------- | -------------------------------------------------------------------------------------------------------------- | ---------------------------------- | ----------------------------------------------------------------------- | -------- | +| consumer | Specifies intended consumer of the webhook. 
| Slack, Jira, Alertmanager, Generic | string | yes | +| state | Defines current state of the webhook. | Enabled, Disabled, Debug | string | yes | +| query | Defines query used to retrieve security events from Calico. | [see Query](#query) | string | yes | +| config | Webhook configuration, required contents of this structure is determined by the value of the `consumer` field. | [see Config](#configuration) | list of [SecurityEventWebhookConfigVar](#securityeventwebhookconfigvar) | yes | - summary: 'HTTP connections from ${source_namespace}/${source_name_aggr} to /${dest_name_aggr}' +### SecurityEventWebhookConfigVar[​](#securityeventwebhookconfigvar) - severity: 50 +| Field | Description | Schema | Required | +| --------- | ------------------------------------------------------------------------- | --------------------------------------------------------------------------- | ----------------------------------- | +| name | Configuration variable name. | string | yes | +| value | Direct value for the variable. | string | yes if `valueFrom` is not specified | +| valueFrom | Value defined either in a Kubernetes ConfigMap or in a Kubernetes Secret. | [SecurityEventWebhookConfigVarSource](#securityeventwebhookconfigvarsource) | yes if `value` is not specified | - dataSet: flows +### SecurityEventWebhookConfigVarSource[​](#securityeventwebhookconfigvarsource) - query: dest_namespace="" AND dest_port=80 +| Field | Description | Schema | Required | +| --------------- | ------------------------------- | ------------------------------------------------------------------------------------------------------------ | ----------------------------------------- | +| configMapKeyRef | Kubernetes ConfigMap reference. | `ConfigMapKeySelector` (referenced ConfigMap key should exist in the `tigera-intrusion-detection` namespace) | yes if `secretKeyRef` is not specified | +| secretKeyRef | Kubernetes Secret reference. 
| `SecretKeySelector` (referenced Secret key should exist in the `tigera-intrusion-detection` namespace) | yes if `configMapKeyRef` is not specified | - aggregateBy: [source_namespace, dest_name_aggr, source_name_aggr] +### Status[​](#status) - field: count +Field `status` reflects the health of a webhook. It is a list of [Kubernetes Conditions](https://pkg.go.dev/k8s.io/apimachinery@v0.23.0/pkg/apis/meta/v1#Condition). - metric: sum +## Query[​](#query) - condition: gte +Security event webhooks use a domain-specific query language to select which records from the data set should trigger the HTTP request. - threshold: 1 -``` +The query language is composed of any number of selectors, combined with boolean expressions (`AND`, `OR`, and `NOT`), set expressions (`IN` and `NOTIN`) and bracketed subexpressions. These are translated by Calico Enterprise to Elastic DSL queries that are executed on the backend. -## Appendix: Valid fields for queries[​](#appendix-valid-fields-for-queries) +Set expressions support wildcard operators asterisk (`*`) and question mark (`?`). The asterisk sign matches zero or more characters and the question mark matches a single character. -### Audit logs[​](#audit-logs) +A selector consists of a key, comparator, and value. Keys and values may be identifiers consisting of alphanumerics and underscores (`_`) with the first character being alphabetic or an underscore, or may be quoted strings. Values may also be integer or floating point numbers. Comparators may be `=` (equal), `!=` (not equal), `<` (less than), `<=` (less than or equal), `>` (greater than), or `>=` (greater than or equal). -See [audit.k8s.io group v1](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go) for descriptions of fields. 
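To illustrate, the selector syntax above composes into webhook queries like the following. Only `type = waf` comes from the sample earlier in this section; the other field names and values are hypothetical.

```yaml
# Illustrative query fragments only; field names other than "type" are assumptions.
query: type = waf
# Boolean combination of selectors (field name assumed):
# query: type = waf AND NOT source_namespace = "kube-system"
# Set expression with literal values (values assumed):
# query: type IN {waf, suspicious_flow}
```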
+## Configuration[​](#configuration) -### DNS logs[​](#dns-logs) +Data required to be present in the `config` section of the security event webhook `spec` depends on the intended consumer for the HTTP requests generated by the webhook. The value in the `consumer` field of the `spec` specifies the consumer and therefore data that is required to be present. Currently Calico supports the following consumers: `Slack`, `Jira`, `Alertmanager` and `Generic`. Payloads generated by the webhook will be different for each of the listed use cases. -See [DNS logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/dns/dns-logs) for description of fields. +### Slack[​](#slack) -### Flow logs[​](#flow-logs) +Data fields required for the `Slack` value present in the `spec.consumer` field of a webhook: -See [Flow logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/flow/datatypes) for description of fields. +| Field | Description | Required | +| ----- | ------------------------------------------------------------------------------- | -------- | +| url | A valid Slack [Incoming Webhook URL](https://api.slack.com/messaging/webhooks). | yes | -### L7 logs[​](#l7-logs) +### Generic[​](#generic) -See [L7 logs](https://docs.tigera.io/calico-enterprise/latest/observability/elastic/l7/datatypes) for description of fields. +Data fields required for the `Generic` value present in the `spec.consumer` field of a webhook: -### Global network policy +| Field | Description | Required | +| ----- | ---------------------------------------------------- | -------- | +| url | A generic and valid URL of another HTTP(s) endpoint. | yes | - +### Jira[​](#jira) - +Data fields required for the `Jira` value present in the `spec.consumer` field of a webhook: + +| Field | Description | Required | +| --------- | ---------------------------------------------------------------------- | -------- | +| url | URL of a Jira REST API v2 endpoint for the organisation. 
| yes | +| project | A valid Jira project abbreviation. | yes | +| issueType | A valid issue type for the selected project, examples: `Bug` or `Task` | yes | +| username | A valid Jira user name. | yes | +| apiToken | A valid Jira API token for the user. | yes | + +### Alertmanager[​](#alertmanager) + +Data fields required for the `Alertmanager` value present in the `spec.consumer` field of a webhook: + +| Field | Description | Required | +| --------- | --------------------------------------------------------------------------------------- | -------- | +| url | URL of the Alertmanager REST API v2 endpoint for alerts (ending with `/api/v2/alerts`). | yes | +| basicAuth | MD5 checksum of username and password separated by the colon character. | no | +| ca.crt | Certificate authority in PEM format (required for mTLS configuration). | no | +| tls.key | Private key in PEM format (required for mTLS configuration). | no | +| tls.crt | Certificate in PEM format (required for mTLS configuration). | no | + +### Staged global network policy @@ -63956,15 +65709,15 @@ See [L7 logs](https://docs.tigera.io/calico-enterprise/latest/observability/elas -A global network policy resource (`GlobalNetworkPolicy`) represents an ordered set of rules which are applied to a collection of endpoints that match a [label selector](#selector). +A staged global network policy resource (`StagedGlobalNetworkPolicy`) represents an ordered set of rules which are applied to a collection of endpoints that match a [label selector](#selector). These rules are used to preview network behavior and do not enforce network traffic. For enforcing network traffic, see [global network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy). -`GlobalNetworkPolicy` is not a namespaced resource. 
`GlobalNetworkPolicy` applies to [workload endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) in all namespaces, and to [host endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint). Select a namespace in a `GlobalNetworkPolicy` in the standard selector by using `projectcalico.org/namespace` as the label name and a `namespace` name as the value to compare against, e.g., `projectcalico.org/namespace == "default"`. See [network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) for namespaced network policy. +`StagedGlobalNetworkPolicy` is not a namespaced resource. `StagedGlobalNetworkPolicy` applies to [workload endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) in all namespaces, and to [host endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint). Select a namespace in a `StagedGlobalNetworkPolicy` in the standard selector by using `projectcalico.org/namespace` as the label name and a `namespace` name as the value to compare against, e.g., `projectcalico.org/namespace == "default"`. See [staged network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/stagednetworkpolicy) for staged namespaced network policy. -`GlobalNetworkPolicy` resources can be used to define network connectivity rules between groups of Calico Enterprise endpoints and host endpoints, and take precedence over [Profile resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) if any are defined. 
+`StagedGlobalNetworkPolicy` resources can be used to define network connectivity rules between groups of Calico Enterprise endpoints and host endpoints, and take precedence over [Profile resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) if any are defined. -GlobalNetworkPolicies are organized into [tiers](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the next [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), to enable hierarchical security policy. +StagedGlobalNetworkPolicies are organized into [tiers](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the next [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), to enable hierarchical security policy. -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `globalnetworkpolicy.projectcalico.org`, `globalnetworkpolicies.projectcalico.org` and abbreviations such as `globalnetworkpolicy.p` and `globalnetworkpolicies.p`. +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `stagedglobalnetworkpolicy.projectcalico.org`, `stagedglobalnetworkpolicies.projectcalico.org` and abbreviations such as `stagedglobalnetworkpolicy.p` and `stagedglobalnetworkpolicies.p`. 
## Sample YAML[​](#sample-yaml) @@ -63973,7 +65726,7 @@ This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on ```yaml apiVersion: projectcalico.org/v3 -kind: GlobalNetworkPolicy +kind: StagedGlobalNetworkPolicy metadata: @@ -63995,14 +65748,6 @@ spec: - action: Allow - metadata: - - annotations: - - from: frontend - - to: database - protocol: TCP source: @@ -64060,7 +65805,7 @@ Only one of `doNotTrack` and `preDNAT` may be set to `true` (in a given policy). `applyOnForward` must be set to `true` if either `doNotTrack` or `preDNAT` is `true` because for a given policy, any untracked rules or rules before DNAT will in practice apply to forwarded traffic. -See [Policy for hosts](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/) for how `doNotTrack` and `preDNAT` and `applyOnForward` can be useful for host endpoints. +See [Using Calico Enterprise to Secure Host Interfaces](https://docs.tigera.io/calico-enterprise/latest/reference/host-endpoints/) for how `doNotTrack` and `preDNAT` and `applyOnForward` can be useful for host endpoints. ### Rule[​](#rule) @@ -64247,20 +65992,6 @@ It would match: - Has a value for `my-label` that starts with "prod", and, - Has a role label with value either "frontend", or "business". -Understanding scopes and the `all()` and `global()` operators: selectors have a scope of resources that they are matched against, which depends on the context in which they are used. For example: - -- The `nodeSelector` in an `IPPool` selects over `Node` resources. - -- The top-level selector in a `NetworkPolicy` selects over the workloads *in the same namespace* as the `NetworkPolicy`. - -- The top-level selector in a `GlobalNetworkPolicy` doesn't have the same restriction, it selects over all endpoints including namespaced `WorkloadEndpoint`s and non-namespaced `HostEndpoint`s. 
- -- The `namespaceSelector` in a `NetworkPolicy` (or `GlobalNetworkPolicy`) *rule* selects over the labels on namespaces rather than workloads. - -- The `namespaceSelector` determines the scope of the accompanying `selector` in the entity rule. If no `namespaceSelector` is present then the rule's `selector` matches the default scope for that type of policy. (This is the same namespace for `NetworkPolicy` and all endpoints/network sets for `GlobalNetworkPolicy`) - -- The `global()` operator can be used (only) in a `namespaceSelector` to change the scope of the main `selector` to include non-namespaced resources such as [GlobalNetworkSet](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset). This allows namespaced `NetworkPolicy` resources to refer to global non-namespaced resources, which would otherwise be impossible. - ### Ports[​](#ports) Calico Enterprise supports the following syntaxes for expressing ports. @@ -64309,72 +66040,6 @@ Performance hints provide a way to tell Calico Enterprise about the intended use - `AssumeNeededOnEveryNode`: normally, Calico Enterprise only calculates a policy's rules and selectors on nodes where the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases. The `AssumeNeededOnEveryNode` hint tells Calico Enterprise to treat the policy as "in use" on *every* node. This is useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads" the policy on every node so that there is less work to do when the first endpoint matching the policy shows up. It also prevents work from being done to tear down the policy when the last endpoint is drained. 
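As a sketch of where such a hint is declared, assuming `performanceHints` is a list-valued field in the policy spec (the policy name and selector here are placeholders):

```yaml
apiVersion: projectcalico.org/v3
kind: StagedGlobalNetworkPolicy
metadata:
  name: preload-example        # placeholder name
spec:
  tier: default
  selector: all()              # placeholder: a policy known to apply broadly
  performanceHints:
    - AssumeNeededOnEveryNode  # precompute this policy's rules on every node
```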
-## Application layer policy[​](#application-layer-policy) - -Application layer policy is an optional feature of Calico Enterprise and [must be enabled](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp) to use the following match criteria. - -> **SECONDARY:** Application layer policy match criteria are supported with the following restrictions. -> -> - Only ingress policy is supported. Egress policy must not contain any application layer policy match clauses. -> - Rules must have the action `Allow` if they contain application layer policy match clauses. - -### HTTPMatch[​](#httpmatch) - -An HTTPMatch matches attributes of an HTTP request. The presence of an HTTPMatch clause on a Rule will cause that rule to only match HTTP traffic. Other application layer protocols will not match the rule. - -Example: - -```yaml -http: - - methods: ['GET', 'PUT'] - - paths: - - - exact: '/projects/calico' - - - prefix: '/users' - - headers: - - - header: 'x-forwarded-for' - - operator: 'HasPrefix' - - values: ['192.168.0.1', '192.168.0.254'] -``` - -| Field | Description | Schema | -| ------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- | -| methods | Match HTTP methods. Case sensitive. [Standard HTTP method descriptions.](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html) | list of strings | -| paths | Match HTTP paths. Case sensitive. | list of [HTTPPathMatch](#httppathmatch) | -| headers | Match HTTP headers. | list of [HTTPHeaderMatch](#httpheadermatch) | - -### HTTPPathMatch[​](#httppathmatch) - -| Syntax | Example | Description | -| ------ | ------------------- | ------------------------------------------------------------------------------- | -| exact | `exact: "/foo/bar"` | Matches the exact path as written, not including the query string or fragments. 
| -| prefix | `prefix: "/keys"` | Matches any path that begins with the given prefix. | - -### HTTPHeaderMatch[​](#httpheadermatch) - -| Syntax | Example | Description | -| -------- | ---------------------------------- | ------------------------------------------------------------------------------------------ | -| header | `x-forwarded-for` | Name of a HTTP header. Header names are case insensitive, please use lowercase characters. | -| operator | `In` | Operator name to apply to the HTTP header value. Case sensitive. | -| values | `['192.168.0.1', '192.168.0.254']` | Values that the operator will test the HTTP header value against. Case sensitive. | - -The following operators are allowed: - -- `Exists`: matches the HTTP request header if the specified header exists. `values` are ignored. -- `DoesNotExist`: matches the HTTP request header if the specified header does not exist. `values` are ignored. -- `HasPrefix`: matches the HTTP request header if the specified header has a prefix from any the values provided. -- `HasSuffix`: matches the HTTP request header if the specified header has a suffix from any the values provided. -- `In`: matches the HTTP request header if its value is in the set of values provided. -- `NotIn`: matches the HTTP request header if its value is not in the set of values provided. -- `MatchesRegex`: matches the HTTP request header if its value matches any of the regular expressions in values field. - ## Supported operations[​](#supported-operations) | Datastore type | Create/Delete | Update | Get/List | Notes | @@ -64383,768 +66048,347 @@ The following operators are allowed: #### List filtering on tiers[​](#list-filtering-on-tiers) -List and watch operations may specify label selectors or field selectors to filter `GlobalNetworkPolicy` resources on tiers returned by the API server. When no selector is specified, the API server returns all `GlobalNetworkPolicy` resources from all tiers that the user has access to. 
+List and watch operations may specify label selectors or field selectors to filter `StagedGlobalNetworkPolicy` resources on tiers returned by the API server. When no selector is specified, the API server returns all `StagedGlobalNetworkPolicy` resources from all tiers that the user has access to. ##### Field selector[​](#field-selector) When using the field selector, supported operators are `=` and `==` -The following example shows how to retrieve all `GlobalNetworkPolicy` resources in the default tier: +The following example shows how to retrieve all `StagedGlobalNetworkPolicy` resources in the default tier: ```bash -kubectl get globalnetworkpolicy --field-selector spec.tier=default +kubectl get stagedglobalnetworkpolicy --field-selector spec.tier=default ``` ##### Label selector[​](#label-selector) When using the label selector, supported operators are `=`, `==` and `IN`. -The following example shows how to retrieve all `GlobalNetworkPolicy` resources in the `default` and `net-sec` tiers: +The following example shows how to retrieve all `StagedGlobalNetworkPolicy` resources in the `default` and `net-sec` tiers: ```bash -kubectl get globalnetworkpolicy -l 'projectcalico.org/tier in (default, net-sec)' -``` - -### Global network set - - - -A global network set resource (GlobalNetworkSet) represents an arbitrary set of IP subnetworks/CIDRs, allowing it to be matched by Calico Enterprise policy. Network sets are useful for applying policy to traffic coming from (or going to) external, non-Calico Enterprise, networks. - -GlobalNetworkSets can also include domain names, whose effect is to allow egress traffic to those domain names, when the GlobalNetworkSet is matched by the destination selector of an egress rule with action Allow. Domain names have no effect in ingress rules, or in a rule whose action is not Allow. 
- -> **SECONDARY:** Calico Enterprise implements policy for domain names by learning the corresponding IPs from DNS, then programming rules to allow those IPs. This means that if multiple domain names A, B, and C all map to the same IP, and there is domain-based policy to allow A, traffic to B and C will be allowed as well.

The metadata for each network set includes a set of labels. When Calico Enterprise is calculating the set of IPs that should match a source/destination selector within a [global network policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) rule, or within a [network policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) rule whose `namespaceSelector` includes `global()`, it includes the CIDRs from any network sets that match the selector.

> **SECONDARY:** Since Calico Enterprise matches packets based on their source/destination IP addresses, Calico Enterprise rules may not behave as expected if there is NAT between the Calico Enterprise-enabled node and the networks listed in a network set. For example, in Kubernetes, incoming traffic via a service IP is typically SNATed by the kube-proxy before reaching the destination host, so Calico Enterprise's workload policy will see the kube-proxy's host's IP as the source instead of the real source.

For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `globalnetworkset.projectcalico.org`, `globalnetworksets.projectcalico.org` and abbreviations such as `globalnetworkset.p` and `globalnetworksets.p`.
- -## Sample YAML[​](#sample-yaml)

```yaml
apiVersion: projectcalico.org/v3

kind: GlobalNetworkSet

metadata:

  name: a-name-for-the-set

  labels:

    role: external-database

spec:

  nets:

  - 198.51.100.0/28

  - 203.0.113.0/24

  allowedEgressDomains:

  - db.com

  - '*.db.com'
```

## Global network set definition[​](#global-network-set-definition)

### Metadata[​](#metadata)

| Field | Description | Accepted Values | Schema |
| ------ | ------------------------------------------ | ------------------------------------------------- | ------ |
| name | The name of this network set. | Lower-case alphanumeric with optional `-` or `.`. | string |
| labels | A set of labels to apply to this endpoint. | | map |

### Spec[​](#spec)

| Field | Description | Accepted Values | Schema | Default |
| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | ------ | ------- |
| nets | The IP networks/CIDRs to include in the set. | Valid IPv4 or IPv6 CIDRs, for example "192.0.2.128/25" | list | |
| allowedEgressDomains | The list of domain names that belong to this set and are honored in egress allow rules only. Domain names specified here only work to allow egress traffic from the cluster to external destinations. They don't work to *deny* traffic to destinations specified by domain name, or to allow ingress traffic from *sources* specified by domain name.
| List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list | |

### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names)

When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example:

- `microsoft.com`
- `tigera.io`

With a single asterisk in any part of the domain name, it matches 1 or more name components at that position. For example:

- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com`
- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com`
- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on

**Not** supported are:

- Multiple wildcards in the same domain, for example: `*.*.mycompany.com`
- Asterisks that are not the entire component, for example: `www.g*.com`
- A wildcard as the last component, for example: `www.mycompany.*`
- More general wildcards, such as regular expressions

### Global report

A global report resource is a configuration for generating compliance reports. A global report configuration in Calico Enterprise lets you:

- Specify report contents, frequency, and data filtering
- Specify the node(s) on which to run the report generation jobs
- Enable/disable creation of new jobs for generating the report

### Staged Kubernetes network policy

A staged Kubernetes network policy resource (`StagedKubernetesNetworkPolicy`) represents a staged version of [Kubernetes network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies). This is used to preview network behavior before actually enforcing the network policy. Once persisted, this will create a Kubernetes network policy backed by a Calico Enterprise [network policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy).
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `globalreport.projectcalico.org`, `globalreports.projectcalico.org` and abbreviations such as `globalreport.p` and `globalreports.p`. +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `stagedkubernetesnetworkpolicy.projectcalico.org`, `stagedkubernetesnetworkpolicies.projectcalico.org` and abbreviations such as `stagedkubernetesnetworkpolicy.p` and `stagedkubernetesnetworkpolicies.p`. ## Sample YAML[​](#sample-yaml) -```yaml -apiVersion: projectcalico.org/v3 - -kind: GlobalReport - -metadata: - - name: weekly-full-inventory - -spec: - - reportType: inventory - - schedule: 0 0 * * 0 - - jobNodeSelector: - - nodetype: infrastructure - ---- - -apiVersion: projectcalico.org/v3 - -kind: GlobalReport - -metadata: - - name: hourly-accounts-networkaccess - -spec: - - reportType: network-access - - endpoints: - - namespaces: - - names: ['payable', 'collections', 'payroll'] - - schedule: 0 * * * * - ---- - -apiVersion: projectcalico.org/v3 - -kind: GlobalReport - -metadata: - - name: monthly-widgets-controller-tigera-policy-audit - -spec: - - reportType: policy-audit - - schedule: 0 0 1 * * - - endpoints: - - serviceAccounts: - - names: ['controller'] - - namespaces: - - names: ['widgets'] - ---- - -apiVersion: projectcalico.org/v3 - -kind: GlobalReport - -metadata: - - name: daily-cis-benchmark - -spec: - - reportType: cis-benchmark - - schedule: 0 0 * * * - - cis: - - resultsFilters: - - - benchmarkSelection: { kubernetesVersion: '1.13' } - - exclude: ['1.1.4', '1.2.5'] -``` - -## GlobalReport Definition[​](#globalreport-definition) - -### Metadata[​](#metadata) - -| Field | Description | Accepted Values | Schema | -| ------ | ---------------------------------------- | 
------------------------------------------------ | ------ | -| name | The name of this report. | Lower-case alphanumeric with optional `-` or `.` | string | -| labels | A set of labels to apply to this report. | | map | - -### Spec[​](#spec) - -| Field | Description | Required | Accepted Values | Schema | -| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------- | -| reportType | The type of report to produce. This field controls the content of the report - see the links for each type for more details. | Yes | [cis‑benchmark](https://docs.tigera.io/calico-enterprise/latest/reference/resources/compliance-reports/cis-benchmark), [inventory](https://docs.tigera.io/calico-enterprise/latest/reference/resources/compliance-reports/inventory), [network‑access](https://docs.tigera.io/calico-enterprise/latest/reference/resources/compliance-reports/network-access), [policy‑audit](https://docs.tigera.io/calico-enterprise/latest/reference/resources/compliance-reports/policy-audit) | string | -| endpoints | Specify which endpoints are in scope. If omitted, selects everything. 
| | | [EndpointsSelection](#endpointsselection) | -| schedule | Configure report frequency by specifying start and end time in [cron-format](https://en.wikipedia.org/wiki/Cron). Reports are started 30 minutes (configurable) after the scheduled value to allow enough time for data archival. A maximum limit of 12 schedules per hour is enforced (an average of one report every 5 minutes). | Yes | | string | -| jobNodeSelector | Specify the node(s) for scheduling the report jobs using selectors. | | | map | -| suspend | Disable future scheduled report jobs. In-flight reports are not affected. | | | bool | -| cis | Parameters related to generating a CIS benchmark report. | | | [CISBenchmarkParams](#cisbenchmarkparams) | - -### EndpointsSelection[​](#endpointsselection) - -| Field | Description | Schema | -| --------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------- | -| selector | Endpoint label selector to restrict endpoint selection. | string | -| namespaces | Namespace name and label selector to restrict endpoints by selected namespaces. | [NamesAndLabelsMatch](#namesandlabelsmatch) | -| serviceAccounts | Service account name and label selector to restrict endpoints by selected service accounts. | [NamesAndLabelsMatch](#namesandlabelsmatch) | - -### CISBenchmarkParams[​](#cisbenchmarkparams) - -| Fields | Description | Required | Schema | -| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------------------------------- | -| highThreshold | Integer percentage value that determines the lower limit of passing tests to consider a node as healthy. 
Default: 100 | No | int |
| medThreshold | Integer percentage value that determines the lower limit of passing tests to consider a node as unhealthy. Default: 50 | No | int |
| includeUnscoredTests | Boolean value that, when false, applies a filter to exclude tests that are marked as “Unscored” by the CIS benchmark standard. If true, the tests will be included in the report. Default: false | No | bool |
| numFailedTests | Integer value that sets the number of tests to display in the Top-failed Tests section of the CIS benchmark report. Default: 5 | No | int |
| resultsFilters | Specifies an include or exclude filter to apply on the test results that will appear on the report. | No | [CISBenchmarkFilter](#cisbenchmarkfilter) |

### CISBenchmarkFilter[​](#cisbenchmarkfilter)

| Fields | Description | Required | Schema |
| ------------------ | ---------------------------------------------------------------------------------------- | -------- | ----------------------------------------------- |
| benchmarkSelection | Specify which set of benchmarks this filter should apply to. Selects all benchmark types. | No | [CISBenchmarkSelection](#cisbenchmarkselection) |
| exclude | Specify which benchmark tests to exclude. | No | array of strings |
| include | Specify which benchmark tests to include only (higher precedence than exclude). | No | array of strings |

### CISBenchmarkSelection[​](#cisbenchmarkselection)

| Fields | Description | Required | Schema |
| ----------------- | -------------------------------------- | -------- | ------ |
| kubernetesVersion | Specifies a version of the benchmarks. | Yes | string |

### NamesAndLabelsMatch[​](#namesandlabelsmatch)

| Field | Description | Schema |
| -------- | ------------------------------------ | ------ |
| names | Set of resource names. | list |
| selector | Selects a set of resources by label. | string |

Use the `NamesAndLabelsMatch` to limit the scope of endpoints.
If both `names` and `selector` are specified, the resource is identified using label *AND* name match. - -> **SECONDARY:** To use the Calico Enterprise compliance reporting feature, you must ensure all required resource types are being audited and the logs archived in Elasticsearch. You must explicitly configure the [Kubernetes API Server](https://docs.tigera.io/calico-enterprise/latest/observability/kube-audit) to send audit logs for Kubernetes-owned resources to Elasticsearch. - -## Supported operations[​](#supported-operations) - -| Datastore type | Create/Delete | Update | Get/List | Notes | -| --------------------- | ------------- | ------ | -------- | ----- | -| Kubernetes API server | Yes | Yes | Yes | | - -### Global threat feed - -A global threat feed resource (GlobalThreatFeed) represents a feed of threat intelligence used for security purposes. - -Calico Enterprise supports threat feeds that give either - -- a set of IP addresses or IP prefixes, with content type IPSet, or -- a set of domain names, with content type DomainNameSet - -For each IPSet threat feed, Calico Enterprise automatically monitors flow logs for members of the set. IPSet threat feeds can also be configured to be synchronized to a [global network set](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset), allowing you to use them as a dynamically-updating deny-list by incorporating the global network set into network policy. - -For each DomainNameSet threat feed, Calico Enterprise automatically monitors DNS logs for queries (QNAME) or answers (RR NAME or RDATA) that contain members of the set. - -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `globalthreatfeed.projectcalico.org`, `globalthreatfeeds.projectcalico.org` and abbreviations such as `globalthreatfeed.p` and `globalthreatfeeds.p`. 
- -## Sample YAML[​](#sample-yaml) +Below is a sample policy created from the example policy from the [Kubernetes NetworkPolicy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource). The only difference between this policy and the example Kubernetes version is that the `apiVersion` and `kind` are changed to properly specify a staged Kubernetes network policy. ```yaml apiVersion: projectcalico.org/v3 -kind: GlobalThreatFeed - -metadata: - - name: sample-global-threat-feed - -spec: - - content: IPSet - - mode: Enabled - - description: "This is the sample global threat feed" - - feedType: Custom - - globalNetworkSet: - - # labels to set on the GNS - - labels: - - level: high - - pull: - - # accepts time in golang duration format - - period: 24h - - http: - - format: - - newlineDelimited: {} - - url: https://an.example.threat.feed/deny-list - - headers: - - - name: "Accept" - - value: "text/plain" - - - name: "APIKey" - - valueFrom: - - # secrets selected must be in the "tigera-intrusion-detection" namespace to be used - - secretKeyRef: - - name: "globalthreatfeed-sample-global-threat-feed-example" - - key: "apikey" -``` - -## Push or Pull[​](#push-or-pull) - -You can configure Calico Enterprise to pull updates from your threat feed using a [`pull`](#pull) stanza in the global threat feed spec. - -Alternately, you can have your threat feed push updates directly. Leave out the `pull` stanza, and configure your threat feed to create or update the Elasticsearch document that corresponds to the global threat feed object. - -For IPSet threat feeds, this Elasticsearch document will be in the index `.tigera.ipset.` and must have the ID set to the name of the global threat feed object. The doc should have a single field called `ips`, containing a list of IP prefixes. 
- -For example: - -```text -PUT .tigera.ipset.cluster01/_doc/sample-global-threat-feed - -{ - - "ips" : ["99.99.99.99/32", "100.100.100.0/24"] - -} -``` - -For DomainNameSet threat feeds, this Elasticsearch document will be in the index `.tigera.domainnameset.` and must have the ID set to the name of the global threat feed object. The doc should have a single field called `domains`, containing a list of domain names. - -For example: - -```text -PUT .tigera.domainnameset.cluster01/_doc/example-global-threat-feed - -{ - - "domains" : ["malware.badstuff", "hackers.r.us"] - -} -``` - -Refer to the [Elasticsearch document APIs](https://www.elastic.co/guide/en/elasticsearch/reference/6.4/docs-update.html) for more information on how to create and update documents in Elasticsearch. - -## GlobalThreatFeed Definition[​](#globalthreatfeed-definition) - -### Metadata[​](#metadata) - -| Field | Description | Accepted Values | Schema | -| ------ | --------------------------------------------- | ----------------------------------------- | ------ | -| name | The name of this threat feed. | Lower-case alphanumeric with optional `-` | string | -| labels | A set of labels to apply to this threat feed. 
| | map | - -### Spec[​](#spec) - -| Field | Description | Accepted Values | Schema | Default | -| ---------------- | ---------------------------------------------------- | ---------------------- | --------------------------------------------- | ------- | -| content | What kind of threat intelligence is provided | IPSet, DomainNameSet | string | IPSet | -| mode | Determines if the threat feed is Enabled or Disabled | Enabled, Disabled | string | Enabled | -| description | Human-readable description of the template | Maximum 256 characters | string | | -| feedType | Distinguishes Builtin threat feeds from Custom feeds | Builtin, Custom | string | Custom | -| globalNetworkSet | Include to sync with a global network set | | [GlobalNetworkSetSync](#globalnetworksetsync) | | -| pull | Configure periodic pull of threat feed updates | | [Pull](#pull) | | - -### Status[​](#status) - -The `status` is read-only for users and updated by the `intrusion-detection-controller` component as it processes global threat feeds. - -| Field | Description | -| -------------------- | -------------------------------------------------------------------------------- | -| lastSuccessfulSync | Timestamp of the last successful update to the threat intelligence from the feed | -| lastSuccessfulSearch | Timestamp of the last successful search of logs for threats | -| errorConditions | List of errors preventing operation of the updates or search | - -### GlobalNetworkSetSync[​](#globalnetworksetsync) - -When you include a `globalNetworkSet` stanza in a global threat feed, it triggers synchronization with a [global network set](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset). This global network set will have the name `threatfeed.` where `` is the name of the global threat feed it is synced with. This is only supported for threat feeds of type IPSet. 
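For example (a sketch; the `level == 'high'` label comes from the `globalNetworkSet.labels` in the sample threat feed above, and the surrounding policy is illustrative), the synced network set can back a dynamically updating deny rule:

```yaml
# Hypothetical egress rules: deny traffic to any IP in network sets labeled
# level == 'high' -- including the set synced from the threat feed -- and
# allow everything else.
egress:
  - action: Deny
    destination:
      selector: level == 'high'
  - action: Allow
```

Because the synced set is updated on every successful pull, the deny rule tracks the feed without further policy changes.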
- -> **SECONDARY:** A `globalNetworkSet` stanza only works for `IPSet` threat feeds, and you must also include a `pull` stanza. - -| Field | Description | Accepted Values | Schema | -| ------ | --------------------------------------------------------- | --------------- | ------ | -| labels | A set of labels to apply to the synced global network set | | map | - -### Pull[​](#pull) - -When you include a `pull` stanza in a global threat feed, it triggers a periodic pull of new data. On successful pull and update to the data store, we update the `status.lastSuccessfulSync` timestamp. - -If you do not include a `pull` stanza, you must configure your system to [push](#push-or-pull) updates. +kind: StagedKubernetesNetworkPolicy -| Field | Description | Accepted Values | Schema | Default | -| ------ | ------------------------------------- | --------------- | ------------------------------------------------------------- | ------- | -| period | How often to pull an update | ≥ 5m | [Duration string](https://golang.org/pkg/time/#ParseDuration) | 24h | -| http | Pull the update from an HTTP endpoint | | [HTTPPull](#httppull) | | +metadata: -### HTTPPull[​](#httppull) + name: test-network-policy -Pull updates from the threat feed by doing an HTTP GET against the given URL. + namespace: default -| Field | Description | Accepted Values | Schema | -| ------- | --------------------------------------------------------- | --------------- | ------------------------- | -| format | Format of the data the threat feed returns | | [Format](#format) | -| url | The URL to query | | string | -| headers | List of additional HTTP Headers to include on the request | | [HTTPHeader](#httpheader) | +spec: -IPSet threat feeds must contain IP addresses or IP prefixes. For example: + podSelector: -```text - This is an IP Prefix + matchLabels: -100.100.100.0/24 + role: db - This is an address + policyTypes: -99.99.99.99 -``` + - Ingress -DomainNameSet threat feeds must contain domain names. 
For example: + - Egress -```text - Suspicious domains + ingress: -malware.badstuff + - from: -hackers.r.us -``` + - ipBlock: -Internationalized domain names (IDNA) may be encoded either as Unicode in UTF-8 format, or as ASCII-Compatible Encoding (ACE) according to [RFC 5890](https://tools.ietf.org/html/rfc5890). + cidr: 172.17.0.0/16 -### Format[​](#format) + except: -Several different feed formats are supported. The default, `newlineDelimited`, expects a text file containing entries separated by newline characters. It may also include comments prefixed by `#`. `json` uses a [jsonpath](https://goessner.net/articles/JsonPath/) to extract the desired information from a JSON document. `csv` extracts one column from CSV-formatted data. + - 172.17.1.0/24 -| Field | Description | Schema | -| ---------------- | --------------------------- | ------------- | -| newlineDelimited | Newline-delimited text file | Empty object | -| json | JSON object | [JSON](#json) | -| csv | CSV file | [CSV](#csv) | + - namespaceSelector: -#### JSON[​](#json) + matchLabels: -| Field | Description | Schema | -| ----- | ---------------------------------------------------------------------- | ------ | -| path | [jsonpath](https://goessner.net/articles/JsonPath/) to extract values. | string | + project: myproject -Values can be extracted from the document using any [jsonpath](https://goessner.net/articles/JsonPath/) expression, subject to the limitations mentioned below, that evaluates to a list of strings. For example: `$.` is valid for `["a", "b", "c"]`, and `$.a` is valid for `{"a": ["b", "c"]}`. + - podSelector: -> **WARNING:** No support for subexpressions and filters. Strings in brackets must use double quotes. It cannot operate on JSON decoded struct fields. 
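For instance, a `pull` stanza using the `json` format might look like the following sketch (the URL is hypothetical, and the jsonpath assumes the feed returns a document such as `{"ips": ["99.99.99.99/32", "100.100.100.0/24"]}`):

```yaml
pull:
  period: 24h
  http:
    format:
      json:
        path: $.ips # extracts the list of strings under the "ips" key
    url: https://an.example.threat.feed/deny-list.json
```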
+ matchLabels: -#### CSV[​](#csv) + role: frontend -| Field | Description | Schema | -| --------------------------- | ------------------------------------------------------------------------- | ------ | -| fieldNum | Number of column containing values. Mutually exclusive with `fieldName`. | int | -| fieldName | Name of column containing values, requires `header: true`. | string | -| header | Whether or not the document contains a header row. | bool | -| columnDelimiter | An alternative delimiter character, such as `\|`. | string | -| commentDelimiter | Lines beginning with this character are skipped. `#` is common. | string | -| recordSize | The number of columns expected in the document. Auto detected if omitted. | int | -| disableRecordSizeValidation | Disable row size checking. Mutually exclusive with `recordSize`. | bool | + ports: -### HTTPHeader[​](#httpheader) + - protocol: TCP -| Field | Description | Schema | -| --------- | --------------------------------------------------------- | ------------------------------------- | -| name | Header name | string | -| value | Literal value | string | -| valueFrom | Include to retrieve the value from a config map or secret | [HTTPHeaderSource](#httpheadersource) | + port: 6379 -> **SECONDARY:** You must include either `value` or `valueFrom`, but not both. + egress: -### HTTPHeaderSource[​](#httpheadersource) + - to: -| Field | Description | Schema | -| --------------- | ------------------------------- | ----------------- | -| configMapKeyRef | Get the value from a config map | [KeyRef](#keyref) | -| secretKeyRef | Get the value from a secret | [KeyRef](#keyref) | + - ipBlock: -### KeyRef[​](#keyref) + cidr: 10.0.0.0/24 -KeyRef tells Calico Enterprise where to get the value for a header. The referenced Kubernetes object (either a config map or a secret) must be in the `tigera-intrusion-detection` namespace. The referenced Kubernetes object should have a name with following prefix format: `globalthreatfeed--`. 
+ ports: -| Field | Description | Accepted Values | Schema | Default | -| -------- | --------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | ------ | ------- | -| name | The name of the config map or secret | | string | | -| key | The key within the config map or secret | | string | | -| optional | Whether the pull can proceed without the referenced value | If the referenced value does not exist, `true` means omit the header. `false` means abort the entire pull until it exists | bool | `false` | + - protocol: TCP -### Host endpoint + port: 5978 +``` - +## Definition[​](#definition) -A host endpoint resource (`HostEndpoint`) represents one or more real or virtual interfaces attached to a host that is running Calico Enterprise. It enforces Calico Enterprise policy on the traffic that is entering or leaving the host's default network namespace through those interfaces. +See the [Kubernetes NetworkPolicy documentation](https://v1-21.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#networkpolicyspec-v1-networking-k8s-io) for more information. -- A host endpoint with `interfaceName: *` represents *all* of a host's real or virtual interfaces. +### Staged network policy -- A host endpoint for one specific real interface is configured by `interfaceName: `, for example `interfaceName: eth0`, or by leaving `interfaceName` empty and including one of the interface's IPs in `expectedIPs`. + -Each host endpoint may include a set of labels and list of profiles that Calico Enterprise will use to apply [policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) to the interface. If no profiles or labels are applied, Calico Enterprise will not apply any policy. 
+ -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `hostendpoint.projectcalico.org`, `hostendpoints.projectcalico.org` and abbreviations such as `hostendpoint.p` and `hostendpoints.p`. + -**Default behavior of external traffic to/from host** + -If a host endpoint is created and network policy is not in place, the Calico Enterprise default is to deny traffic to/from that endpoint (except for traffic allowed by failsafe rules). For a named host endpoint (i.e. a host endpoint representing a specific interface), Calico Enterprise blocks traffic only to/from the interface specified in the host endpoint. Traffic to/from other interfaces is ignored. + -> **SECONDARY:** Host endpoints with `interfaceName: *` do not support [untracked policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads). + -For a wildcard host endpoint (i.e. a host endpoint representing all of a host's interfaces), Calico Enterprise blocks traffic to/from *all* interfaces on the host (except for traffic allowed by failsafe rules). + -However, profiles can be used in conjunction with host endpoints to modify default behavior of external traffic to/from the host in the absence of network policy. Calico Enterprise provides a default profile resource named `projectcalico-default-allow` that consists of allow-all ingress and egress rules. Host endpoints with the `projectcalico-default-allow` profile attached will have "allow-all" semantics instead of "deny-all" in the absence of policy. +A staged network policy resource (`StagedNetworkPolicy`) represents an ordered set of rules which are applied to a collection of endpoints that match a [label selector](#selector). These rules are used to preview network behavior and do not enforce network traffic. 
For enforcing network traffic, see [network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy). -Note: If you have custom iptables rules, using host endpoints with allow-all rules (with no policies) will accept all traffic and therefore bypass those custom rules. +`StagedNetworkPolicy` is a namespaced resource. `StagedNetworkPolicy` in a specific namespace only applies to [workload endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) in that namespace. Two resources are in the same namespace if the `namespace` value is set the same on both. See [staged global network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/stagedglobalnetworkpolicy) for staged non-namespaced network policy. -> **SECONDARY:** Auto host endpoints specify the `projectcalico-default-allow` profile so they behave similarly to pod workload endpoints. +`StagedNetworkPolicy` resources can be used to define network connectivity rules between groups of Calico Enterprise endpoints and host endpoints, and take precedence over [profile resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) if any are defined. -> **SECONDARY:** When rendering security rules on other hosts, Calico Enterprise uses the `expectedIPs` field to resolve label selectors to IP addresses. If the `expectedIPs` field is omitted then security rules that use labels will fail to match this endpoint. +StagedNetworkPolicies are organized into [tiers](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the next [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), to enable hierarchical security policy. -**Host to local workload traffic**: Traffic from a host to its workload endpoints (e.g. 
Kubernetes pods) is always allowed, despite any policy in place. This ensures that `kubelet` liveness and readiness probes always work. +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `stagednetworkpolicy.projectcalico.org`, `stagednetworkpolicies.projectcalico.org` and abbreviations such as `stagednetworkpolicy.p` and `stagednetworkpolicies.p`. ## Sample YAML[​](#sample-yaml) +This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on `database` endpoints. + ```yaml apiVersion: projectcalico.org/v3 -kind: HostEndpoint +kind: StagedNetworkPolicy metadata: - name: some.name - - labels: + name: internal-access.allow-tcp-6379 - type: production + namespace: production spec: - interfaceName: eth0 + tier: internal-access - node: myhost + selector: role == 'database' - expectedIPs: + types: - - 192.168.0.1 + - Ingress - - 192.168.0.2 + - Egress - profiles: + ingress: - - profile1 + - action: Allow - - profile2 + protocol: TCP - ports: + source: - - name: some-port + selector: role == 'frontend' - port: 1234 + destination: - protocol: TCP + ports: - - name: another-port + - 6379 - port: 5432 + egress: - protocol: UDP + - action: Allow ``` -## Host endpoint definition[​](#host-endpoint-definition) +## Definition[​](#definition) ### Metadata[​](#metadata) -| Field | Description | Accepted Values | Schema | -| ------ | ------------------------------------------ | --------------------------------------------------- | ------ | -| name | The name of this hostEndpoint. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | -| labels | A set of labels to apply to this endpoint. 
| | map | +| Field | Description | Accepted Values | Schema | Default | +| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- | +| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | | +| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" | ### Spec[​](#spec) -| Field | Description | Accepted Values | Schema | Default | -| ------------- | -------------------------------------------------------------------------- | -------------------------- | -------------------------------------- | ------- | -| node | The name of the node where this HostEndpoint resides. | | string | | -| interfaceName | Either `*` or the name of the specific interface on which to apply policy. | | string | | -| expectedIPs | The expected IP addresses associated with the interface. | Valid IPv4 or IPv6 address | list | | -| profiles | The list of profiles to apply to the endpoint. | | list | | -| ports | List of named ports that this workload exposes. | | List of [EndpointPorts](#endpointport) | | +| Field | Description | Accepted Values | Schema | Default | +| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- | +| order | Controls the order of precedence. Calico Enterprise applies the policy with the lowest value first. | | float | | +| tier | Name of the [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier) this policy belongs to. | | string | `default` | +| selector | Selects the endpoints to which this policy applies. 
| | [selector](#selector) | all() | +| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* | +| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | | +| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | | +| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select a specific service account by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() | +| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | | -### EndpointPort[​](#endpointport) +\* If `types` has no value, Calico Enterprise defaults as follows. -An EndpointPort associates a name with a particular TCP/UDP/SCTP port of the endpoint, allowing it to be referenced as a named port in [policy rules](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#entityrule). 
+> | Ingress Rules Present | Egress Rules Present | `Types` value | +> | --------------------- | -------------------- | ----------------- | +> | No | No | `Ingress` | +> | Yes | No | `Ingress` | +> | No | Yes | `Egress` | +> | Yes | Yes | `Ingress, Egress` | -| Field | Description | Accepted Values | Schema | Default | -| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------- | ------ | ------- | -| name | The name to attach to this port, allowing it to be referred to in [policy rules](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#entityrule). Names must be unique within an endpoint. | | string | | -| protocol | The protocol of this named port. | `TCP`, `UDP`, `SCTP` | string | | -| port | The workload port number. | `1`-`65535` | int | | +### Rule[​](#rule) -> **SECONDARY:** On their own, EndpointPort entries don't result in any change to the connectivity of the port. They only have an effect if they are referred to in policy. +A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order. -## Supported operations[​](#supported-operations) +| Field | Description | Accepted Values | Schema | Default | +| ----------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | ----------------------------- | ------- | +| metadata | Per-rule metadata. | | [RuleMetadata](#rulemetadata) | | +| action | Action to perform when matching this rule. | `Allow`, `Deny`, `Log`, `Pass` | string | | +| protocol | Positive protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | +| notProtocol | Negative protocol match. 
| `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | +| icmp | ICMP match criteria. | | [ICMP](#icmp) | | +| notICMP | Negative match on ICMP. | | [ICMP](#icmp) | | +| ipVersion | Positive IP version match. | `4`, `6` | integer | | +| source | Source match parameters. | | [EntityRule](#entityrule) | | +| destination | Destination match parameters. | | [EntityRule](#entityrule) | | +| http | Match HTTP request parameters. Application layer policy must be enabled to use this field. | | [HTTPMatch](#httpmatch) | | -| Datastore type | Create/Delete | Update | Get/List | Notes | -| --------------------- | ------------- | ------ | -------- | ----- | -| Kubernetes API server | Yes | Yes | Yes | | +After a `Log` action, processing continues with the next rule; `Allow` and `Deny` are immediate and final and no further rules are processed. -### IP pool +An `action` of `Pass` in a `NetworkPolicy` or `GlobalNetworkPolicy` will skip over the remaining policies and jump to the first [profile](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) assigned to the endpoint, applying the policy configured in the profile; if there are no Profiles configured for the endpoint the default applied action is `Deny`. - +### RuleMetadata[​](#rulemetadata) -An IP pool resource (`IPPool`) represents a collection of IP addresses from which Calico Enterprise expects endpoint IPs to be assigned. +Metadata associated with a specific rule (rather than the policy as a whole). The contents of the metadata does not affect how a rule is interpreted or enforced; it is simply a way to store additional information for use by operators or applications that interact with Calico Enterprise. -For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `ippool.projectcalico.org`, `ippools.projectcalico.org` as well as abbreviations such as `ippool.p` and `ippools.p`. 
+| Field | Description | Schema | Default |
+| ----------- | ----------------------------------- | ----------------------- | ------- |
+| annotations | Arbitrary non-identifying metadata. | map of string to string | |

-## Sample YAML[​](#sample-yaml)

+Example:

```yaml
-apiVersion: projectcalico.org/v3
-
-kind: IPPool
-
 metadata:
-
- name: my.ippool-1
-
-spec:
-
- cidr: 10.1.0.0/16
-
- ipipMode: CrossSubnet
-
- natOutgoing: true
-
- disabled: false
-
- nodeSelector: all()
-
- allowedUses:
+ annotations:
-
- - Workload
+ app: database
-
- - Tunnel
+ owner: devops
```

-## IP pool definition[​](#ip-pool-definition)

+Annotations follow the [same rules as Kubernetes for valid syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set).

-### Metadata[​](#metadata)

+On Linux with the iptables data plane, rule annotations are rendered as comments in the form `-m comment --comment "<key>=<value>"` on the iptables rule(s) that correspond to the Calico Enterprise rule.

-| Field | Description | Accepted Values | Schema |
-| ----- | ------------------------------------------- | --------------------------------------------------- | ------ |
-| name | The name of this IPPool resource. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |

+### ICMP[​](#icmp)

-### Spec[​](#spec)

+| Field | Description | Accepted Values | Schema | Default |
+| ----- | ------------------- | -------------------- | ------- | ------- |
+| type | Match on ICMP type. | Can be integer 0-254 | integer | |
+| code | Match on ICMP code.
| Can be integer 0-255 | integer | | -| Field | Description | Accepted Values | Schema | Default | -| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | --------------------------------------------- | -| cidr | IP range to use for this pool. | A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | | -| blockSize | The CIDR size of allocation blocks used by this pool. Blocks are allocated on demand to hosts and are used to aggregate routes. The value can only be set when the pool is created. | 20 to 32 (inclusive) for IPv4 and 116 to 128 (inclusive) for IPv6 | int | `26` for IPv4 pools and `122` for IPv6 pools. | -| ipipMode | The mode defining when IPIP will be used. Cannot be set at the same time as `vxlanMode`. | Always, CrossSubnet, Never | string | `Never` | -| vxlanMode | The mode defining when VXLAN will be used. Cannot be set at the same time as `ipipMode`. | Always, CrossSubnet, Never | string | `Never` | -| natOutgoing | When enabled, packets sent from Calico Enterprise networked containers in this pool to destinations outside of any Calico IP pools will be masqueraded. | true, false | boolean | `false` | -| disabled | When set to true, Calico Enterprise IPAM will not assign addresses from this pool. 
| true, false | boolean | `false` | -| disableBGPExport *(since v3.11.0)* | Disable exporting routes from this IP Pool’s CIDR over BGP. | true, false | boolean | `false` | -| nodeSelector | Selects the nodes where Calico Enterprise IPAM should assign pod addresses from this pool. Can be overridden if a pod [explicitly identifies this IP pool by annotation](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration#using-kubernetes-annotations). | | [selector](#node-selector) | all() | -| allowedUses *(since v3.11.0)* | Controls whether the pool will be used for automatic assignments of certain types. See [below](#allowed-uses). | Workload, Tunnel, HostSecondaryInterface, LoadBalancer | list of strings | `["Workload", "Tunnel"]` | -| awsSubnetID *(since v3.11.0)* | May be set to the ID of an AWS VPC Subnet that contains the CIDR of this IP pool to activate the AWS-backed pool feature. See [below](#aws-backed-pools). | Valid AWS Subnet ID. | string | | -| assignmentMode | Controls whether the pool will be used for automatic assignments or only if requested manually | Automatic, Manual | strings | `Automatic` | +### EntityRule[​](#entityrule) -> **SECONDARY:** Do not use a custom `blockSize` until **all** Calico Enterprise components have been updated to a version that supports it (at least v2.3.0). Older versions of components do not understand the field so they may corrupt the IP pool by creating blocks of incorrect size. +Entity rules specify the attributes of the source or destination of a packet that must match for the rule as a whole to match. Packets can be matched on combinations of: -### Allowed uses[​](#allowed-uses) +- Identity of the source/destination, by using [Selectors](#selectors) or by specifying a particular Kubernetes `Service`. 
Selectors can match [workload endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint), [host endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint) and ([namespaced](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkset) or [global](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset)) network sets. +- Source/destination IP address, protocol and port. -When automatically assigning IP addresses to workloads, only pools with "Workload" in their `allowedUses` field are consulted. Similarly, when assigning IPs for tunnel devices, only "Tunnel" pools are eligible. Finally, when assigning IP addresses for AWS secondary ENIs, only pools with allowed use "HostSecondaryInterface" are candidates. +If the rule contains multiple match criteria (for example, an IP and a port) then all match criteria must match for the rule as a whole to match a packet. -Combining options for the `allowedUses` field is limited. You can specify only the following options and option combinations: +| Field | Description | Accepted Values | Schema | Default | +| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------- | +| nets | Match packets with IP in any of the listed CIDRs. 
| List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | +| notNets | Negative match on CIDRs. Match packets with IP not in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | +| selector | Positive match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | +| notSelector | Negative match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | +| namespaceSelector | Positive match on selected namespaces. If specified, only workload endpoints in the selected Kubernetes namespaces are matched. Matches namespaces based on the labels that have been applied to the namespaces. Defines the scope that selectors will apply to, if not defined then selectors apply to the NetworkPolicy's namespace. Match a specific namespace by name using the `projectcalico.org/name` label. Select the non-namespaced resources like GlobalNetworkSet(s), host endpoints to which this policy applies by using `global()` selector. | Valid selector | [selector](#selector) | | +| ports | Positive match on the specified ports | | list of [ports](#ports) | | +| domains | Positive match on [domain names](#exact-and-wildcard-domain-names). | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list of strings | | +| notPorts | Negative match on the specified ports | | list of [ports](#ports) | | +| serviceAccounts | Match endpoints running under service accounts. If a `namespaceSelector` is also defined, the set of service accounts this applies to is limited to the service accounts in the selected namespaces. 
| | [ServiceAccountMatch](#serviceaccountmatch) | |
+| services | Match the specified service(s). If specified on egress rule destinations, no other selection criteria can be set. If specified on ingress rule sources, only positive or negative matches on ports can be specified. | | [ServiceMatch](#servicematch) | |

-- `allowedUses: ["Tunnel","Workload"]` (default)
-- `allowedUses: ["Tunnel"]`
-- `allowedUses: ["Workload"]`
-- `allowedUses: ["LoadBalancer"]`
-- `allowedUses: ["HostSecondaryInterface"]`

+> **SECONDARY:** You cannot mix IPv4 and IPv6 CIDRs in a single rule using `nets` or `notNets`. If you need to match both, create 2 rules.

-If the `allowedUses` field is not specified, it defaults to `["Workload", "Tunnel"]` for compatibility with older versions of Calico. It is not possible to specify a pool with no allowed uses.

+#### Selector performance in EntityRules[​](#selector-performance-in-entityrules)

-The `allowedUses` field is only consulted for new allocations, changing the field has no effect on previously allocated addresses.

+When rendering policy into the data plane, Calico Enterprise must identify the endpoints that match the selectors in all active rules. This calculation is optimized for certain common selector types. Using the optimized selector types reduces CPU usage (and policy rendering time) by orders of magnitude. This becomes important at high scale (hundreds of active rules, hundreds of thousands of endpoints).

-Calico Enterprise supports Kubernetes [annotations that force the use of specific IP addresses](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration#requesting-a-specific-ip-address). These annotations take precedence over the `allowedUses` field.

+The optimized operators are as follows:

-### AWS-backed pools[​](#aws-backed-pools)

+- `label == "value"`
+- `label in { 'v1', 'v2' }`
+- `has(label)`
+- `<expr1> && <expr2>` is optimized if **either** `<expr1>` or `<expr2>` is optimized.
-Calico Enterprise supports IP pools that are backed by the AWS fabric. This feature was added in order to support egress gateways on the AWS fabric; the restrictions and requirements are currently documented as part of the [egress gateways on AWS guide](https://docs.tigera.io/calico-enterprise/latest/networking/egress/egress-gateway-aws).

+The following perform like `has(label)`. All endpoints with the label will be scanned to find matches:

-### IPIP[​](#ipip)

+- `label contains 's'`
+- `label starts with 's'`
+- `label ends with 's'`

-Routing of packets using IP-in-IP will be used when the destination IP address is in an IP Pool that has IPIP enabled. In addition, if the `ipipMode` is set to `CrossSubnet`, Calico Enterprise will only route using IP-in-IP if the IP address of the destination node is in a different subnet. The subnet of each node is configured on the node resource (which may be automatically determined when running the `node` service).

+The other operators, and in particular, `all()`, `!`, `||` and `!=` are not optimized.

-For details on configuring IP-in-IP on your deployment, please refer to [Configuring IP-in-IP](https://docs.tigera.io/calico-enterprise/latest/networking/configuring/vxlan-ipip).

+Examples:

-> **SECONDARY:** Setting `natOutgoing` is recommended on any IP Pool with `ipip` enabled. When `ipip` is enabled without `natOutgoing` routing between Workloads and Hosts running Calico Enterprise is asymmetric and may cause traffic to be filtered due to [RPF](https://en.wikipedia.org/wiki/Reverse_path_forwarding) checks failing.

+- `a == 'b'` - optimized
+- `a == 'b' && has(c)` - optimized
+- `a == 'b' || has(c)` - **not** optimized due to use of `||`
+- `c != 'd'` - **not** optimized due to use of `!=`
+- `!has(a)` - **not** optimized due to use of `!`
+- `a == 'b' && c != 'd'` - optimized, `a == 'b'` is optimized so `a == 'b' && <anything>` is optimized.
+- `c != 'd' && a == 'b'` - optimized, `a == 'b'` is optimized so `<anything> && a == 'b'` is optimized.
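To make the optimization guidance above concrete, here is a minimal sketch of a rule that uses an optimized source selector; the tier, policy name, namespace, and labels are all invented for illustration:

```yaml
# Hypothetical policy - all names and labels are placeholders.
apiVersion: projectcalico.org/v3
kind: StagedNetworkPolicy
metadata:
  name: internal-access.allow-from-prod
  namespace: production
spec:
  tier: internal-access
  selector: role == 'database'
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        # Optimized: equality and has() joined with && resolve cheaply.
        selector: env == 'prod' && has(team)
```

A source selector written as `env != 'dev'` might match a similar set of endpoints, but `!=` is not optimized, so the positive form above scales better in large clusters.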
-### VXLAN[​](#vxlan) +### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names) -Routing of packets using VXLAN will be used when the destination IP address is in an IP Pool that has VXLAN enabled. In addition, if the `vxlanMode` is set to `CrossSubnet`, Calico Enterprise will only route using VXLAN if the IP address of the destination node is in a different subnet. The subnet of each node is configured on the node resource (which may be automatically determined when running the `node` service). +The `domains` field is only valid for egress Allow rules. It restricts the rule to apply only to traffic to one of the specified domains. If this field is specified, the parent [Rule](#rule)'s `action` must be `Allow`, and `nets` and `selector` must both be left empty. -> **SECONDARY:** Setting `natOutgoing` is recommended on any IP Pool with `vxlan` enabled. When `vxlan` is enabled without `natOutgoing` routing between Workloads and Hosts running Calico Enterprise is asymmetric and may cause traffic to be filtered due to [RPF](https://en.wikipedia.org/wiki/Reverse_path_forwarding) checks failing. +When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example: -### Block sizes[​](#block-sizes) +- `microsoft.com` +- `tigera.io` -The default block sizes of `26` for IPv4 and `122` for IPv6 provide blocks of 64 addresses. This allows addresses to be allocated in groups to workloads running on the same host. By grouping addresses, fewer routes need to be exchanged between hosts and to other BGP peers. If a host allocates all of the addresses in a block then it will be allocated an additional block. If there are no more blocks available then the host can take addresses from blocks allocated to other hosts. Specific routes are added for the borrowed addresses which has an impact on route table size. +With a single asterisk in any part of the domain name, it matches 1 or more path components at that position. 
For example: -Increasing the block size from the default (e.g., using `24` for IPv4 to give 256 addresses per block) means fewer blocks per host, and potentially fewer routes. But try to ensure that there are at least as many blocks in the pool as there are hosts. +- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com` +- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com` +- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on -Reducing the block size from the default (e.g., using `28` for IPv4 to give 16 addresses per block) means more blocks per host and therefore potentially more routes. This can be beneficial if it allows the blocks to be more fairly distributed amongst the hosts. +**Not** supported are: -### Node Selector[​](#node-selector) +- Multiple wildcards in the same domain, for example: `*.*.mycompany.com` +- Asterisks that are not the entire component, for example: `www.g*.com` +- A wildcard as the last component, for example: `www.mycompany.*` +- More general wildcards, such as regular expressions -For details on configuring IP pool node selectors, please read the [Assign IP addresses based on topology guide.](https://docs.tigera.io/calico-enterprise/latest/networking/ipam/assign-ip-addresses-topology). +> **SECONDARY:** Calico Enterprise implements policy for domain names by learning the corresponding IPs from DNS, then programming rules to allow those IPs. This means that if multiple domain names A, B and C all map to the same IP, and there is domain-based policy to allow A, traffic to B and C will be allowed as well. -#### Selector reference[​](#selector-reference) +### Selector[​](#selector) A label selector is an expression which either matches or does not match a resource based on its labels. 
@@ -65200,7202 +66444,7883 @@ It would match: - Has a value for `my-label` that starts with "prod", and, - Has a role label with value either "frontend", or "business". -## Supported operations[​](#supported-operations) +### Ports[​](#ports) -| Datastore type | Create/Delete | Update | Get/List | Notes | -| --------------------- | ------------- | ------ | -------- | ----- | -| Kubernetes API server | Yes | Yes | Yes | | +Calico Enterprise supports the following syntaxes for expressing ports. -## See also[​](#see-also) +| Syntax | Example | Description | +| --------- | ---------- | ------------------------------------------------------------------- | +| int | 80 | The exact (numeric) port specified | +| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end | +| string | named-port | A named port, as defined in the ports list of one or more endpoints | -The [`IPReservation` resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ipreservation) allows for small parts of an IP pool to be reserved so that they will not be used for automatic IPAM assignments. +An individual numeric port may be specified as a YAML/JSON integer. A port range or named port must be represented as a string. For example, this would be a valid list of ports: -### IP reservation +```yaml +ports: [8080, '1234:5678', 'named-port'] +``` -An IP reservation resource (`IPReservation`) represents a collection of IP addresses that Calico Enterprise should not use when automatically assigning new IP addresses. It only applies when Calico Enterprise IPAM is in use. +#### Named ports[​](#named-ports) -## Sample YAML[​](#sample-yaml) +Using a named port in an `EntityRule`, instead of a numeric port, gives a layer of indirection, allowing for the named port to map to different numeric values for each endpoint. 
-```yaml -apiVersion: projectcalico.org/v3 +For example, suppose you have multiple HTTP servers running as workloads; some exposing their HTTP port on port 80 and others on port 8080. In each workload, you could create a named port called `http-port` that maps to the correct local port. Then, in a rule, you could refer to the name `http-port` instead of writing a different rule for each type of server. -kind: IPReservation +> **SECONDARY:** Since each named port may refer to many endpoints (and Calico Enterprise has to expand a named port into a set of endpoint/port combinations), using a named port is considerably more expensive in terms of CPU than using a simple numeric port. We recommend that they are used sparingly, only where the extra indirection is required. -metadata: +### ServiceAccountMatch[​](#serviceaccountmatch) - name: my-ipreservation-1 +A ServiceAccountMatch matches service accounts in an EntityRule. -spec: +| Field | Description | Schema | +| -------- | ------------------------------- | --------------------- | +| names | Match service accounts by name | list of strings | +| selector | Match service accounts by label | [selector](#selector) | - reservedCIDRs: +### ServiceMatch[​](#servicematch) - - 192.168.2.3 +A ServiceMatch matches a service in an EntityRule. - - 10.0.2.3/32 +| Field | Description | Schema | +| --------- | ------------------------ | ------ | +| name | The service's name. | string | +| namespace | The service's namespace. | string | - - cafe:f00d::/123 -``` +### Performance Hints[​](#performance-hints) -## IP reservation definition[​](#ip-reservation-definition) +Performance hints provide a way to tell Calico Enterprise about the intended use of the policy so that it may process it more efficiently. Currently only one hint is defined: -### Metadata[​](#metadata) +- `AssumeNeededOnEveryNode`: normally, Calico Enterprise only calculates a policy's rules and selectors on nodes where the policy is actually in use (i.e. 
its selector matches a local endpoint). This saves work in most cases. The `AssumeNeededOnEveryNode` hint tells Calico Enterprise to treat the policy as "in use" on *every* node. This is useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads" the policy on every node so that there is less work to do when the first endpoint matching the policy shows up. It also prevents work from being done to tear down the policy when the last endpoint is drained. -| Field | Description | Accepted Values | Schema | -| ----- | -------------------------------------------------- | --------------------------------------------------- | ------ | -| name | The name of this IPReservation resource. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | +## Supported operations[​](#supported-operations) -### Spec[​](#spec) +| Datastore type | Create/Delete | Update | Get/List | Notes | +| ------------------------ | ------------- | ------ | -------- | ----- | +| Kubernetes API datastore | Yes | Yes | Yes | | -| Field | Description | Accepted Values | Schema | Default | -| ------------- | --------------------------------------------------------------- | -------------------------------------------------- | ------ | ------- | -| reservedCIDRs | List of IP addresses and/or networks specified in CIDR notation | List of valid IP addresses (v4 or v6) and/or CIDRs | list | | +#### List filtering on tiers[​](#list-filtering-on-tiers) -### Notes[​](#notes) +List and watch operations may specify label selectors or field selectors to filter `StagedNetworkPolicy` resources on tiers returned by the API server. When no selector is specified, the API server returns all `StagedNetworkPolicy` resources from all tiers that the user has access to. -The implementation of `IPReservation`s is designed to handle reservation of a small number of IP addresses/CIDRs from (generally much larger) IP pools. 
If a significant portion of an IP pool is reserved (say more than 10%) then Calico Enterprise may become significantly slower when searching for free IPAM blocks. +##### Field selector[​](#field-selector) -Since `IPReservations` must be consulted for every IPAM assignment request, it's best to have one or two `IPReservation` resources with multiple addresses per `IPReservation` resource (rather than having many IPReservation resources), each with one address inside. +When using the field selector, supported operators are `=` and `==` -If an `IPReservation` is created after an IP from its range is already in use then the IP is not automatically released back to the pool. The reservation check is only done at auto allocation time. +The following example shows how to retrieve all `StagedNetworkPolicy` resources in the default tier and in all namespaces: -Calico Enterprise supports Kubernetes [annotations that force the use of specific IP addresses](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration#requesting-a-specific-ip-address). These annotations override any `IPReservation`s that are in place. +```bash +kubectl get stagednetworkpolicy.p --field-selector spec.tier=default --all-namespaces +``` -When Windows nodes claim blocks of IPs they automatically assign the first three IPs in each block and the final IP for internal purposes. These assignments cannot be blocked by an `IPReservation`. However, if a whole IPAM block is reserved with an `IPReservation`, Windows nodes will not claim such a block. +##### Label selector[​](#label-selector) -### IPAM configuration +When using the label selector, supported operators are `=`, `==` and `IN`. -An IPAM configuration resource (`IPAMConfiguration`) represents global IPAM configuration options. 
+The following example shows how to retrieve all `StagedNetworkPolicy` resources in the `default` and `net-sec` tiers and in all namespaces: -## Sample YAML[​](#sample-yaml) +```bash +kubectl get stagednetworkpolicy.p -l 'projectcalico.org/tier in (default, net-sec)' --all-namespaces +``` -```yaml -apiVersion: projectcalico.org/v3 +### Tier -kind: IPAMConfiguration +A tier resource (`Tier`) represents an ordered collection of [NetworkPolicies](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) and/or [GlobalNetworkPolicies](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy). Tiers are used to divide these policies into groups of different priorities. These policies are ordered within a Tier: the additional hierarchy of Tiers provides more flexibility because the `Pass` `action` in a Rule jumps to the next Tier. Some example use cases for this are. -metadata: +- Allowing privileged users to define security policy that takes precedence over other users. +- Translating hierarchies of physical firewalls directly into Calico Enterprise policy. - name: default +For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `tier.projectcalico.org`, `tiers.projectcalico.org` and abbreviations such as `tier.p` and `tiers.p`. -spec: +## How policy is evaluated[​](#how-policy-is-evaluated) - strictAffinity: false +When a new connection is processed by Calico Enterprise, each tier that contains a policy that applies to the endpoint processes the packet. Tiers are sorted by their `order` - smallest number first. - maxBlocksPerHost: 4 -``` +Policies in each Tier are then processed in order. 
-## IPAM configuration definition[​](#ipam-configuration-definition) +- If a [NetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) or [GlobalNetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) in the Tier `Allow`s or `Deny`s the packet, then evaluation is done: the packet is handled accordingly. +- If a [NetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) or [GlobalNetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) in the Tier `Pass`es the packet, the next Tier containing a Policy that applies to the endpoint processes the packet. -### Metadata[​](#metadata) +If the Tier applies to the endpoint, but takes no action on the packet, the packet is dropped. This behaviour can be changed by setting the `defaultAction` of a tier to `Pass`. -| Field | Description | Accepted Values | Schema | -| ----- | --------------------------------------------------------- | --------------- | ------ | -| name | Unique name to describe this resource instance. Required. | default | string | +If the last Tier applying to the endpoint `Pass`es the packet, that endpoint's [Profiles](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) are evaluated. -The resource is a singleton which must have the name `default`. +## Sample YAML[​](#sample-yaml) -### Spec[​](#spec) +```yaml +apiVersion: projectcalico.org/v3 -| Field | Description | Accepted Values | Schema | Default | -| ---------------- | ------------------------------------------------------------------- | --------------- | ------ | ------- | -| strictAffinity | When StrictAffinity is true, borrowing IP addresses is not allowed. | true, false | bool | false | -| maxBlocksPerHost | The max number of blocks that can be affine to each host.
| 0 - max(int32) | int | 20 | +kind: Tier -## Supported operations[​](#supported-operations) +metadata: -| Datastore type | Create | Delete | Update | Get/List | -| --------------------- | ------ | ------ | ------ | -------- | -| etcdv3 | Yes | Yes | Yes | Yes | -| Kubernetes API server | Yes | Yes | Yes | Yes | + name: internal-access -### License key +spec: -A License Key resource (`LicenseKey`) represents a user's license to use Calico Enterprise. Keys are provided by Tigera support, and must be applied to the cluster to enable Calico Enterprise features. + order: 100 + + defaultAction: Deny +``` -For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `licensekey.projectcalico.org`, `licensekeys.projectcalico.org` as well as abbreviations such as `licensekey.p` and `licensekeys.p`. +## Definition[​](#definition) -## Working with license keys[​](#working-with-license-keys) +### Metadata[​](#metadata) -### Applying or updating a license key[​](#applying-or-updating-a-license-key) +| Field | Description | Accepted Values | Schema | +| ----- | --------------------- | --------------- | ------ | +| name | The name of the tier. | | string | -When you add Calico Enterprise to an existing Kubernetes cluster or create a new OpenShift cluster, you must apply your license key to complete the installation and gain access to the full set of Calico Enterprise features. +### Spec[​](#spec) -In deployments that use multicluster management, a license key is required only on the management cluster. +| Field | Description | Accepted Values | Schema | Default | +| ------------- | ------------------------------------------------------------------------------------------------------------------------------------ | --------------- | ------ | --------------------- | +| order | (Optional) Indicates priority of this Tier, with lower order taking precedence. 
No value indicates highest order (lowest precedence) | | float | `nil` (highest order) | +| defaultAction | (Optional) Indicates the default action, when this Tier applies to an endpoint, but takes no action on the packet | `Deny`, `Pass` | string | `Deny` | -When your license key expires, you must update it to continue using Calico Enterprise. +All Policies created by Calico Enterprise orchestrator integrations are created in the default (last) Tier. -To apply or update a license key use the following command, replacing `` with the customer name in the file sent to you by Tigera. +### Workload endpoint -**Command** + -```bash -kubectl apply -f -license.yaml -``` +A workload endpoint resource (`WorkloadEndpoint`) represents an interface connecting a Calico Enterprise networked container or VM to its host. **Example** +Each endpoint may specify a set of labels and list of profiles that Calico Enterprise will use to apply policy to the interface. -```bash -kubectl apply -f awesome-corp-license.yaml -``` +A workload endpoint is a namespaced resource, which means a [NetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) in a specific namespace only applies to the WorkloadEndpoint in that namespace. Two resources are in the same namespace if the namespace value is set the same on both. -### Viewing information about your license key[​](#viewing-information-about-your-license-key) +This resource is not supported in `kubectl`. -To view the number of licensed nodes and the license key expiry, use: +> **SECONDARY:** While `calicoctl` allows the user to fully manage Workload Endpoint resources, the lifecycle of these resources is generally handled by an orchestrator-specific plugin such as the Calico Enterprise CNI plugin. In general, we recommend that you only use `calicoctl` to view this resource type.
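The tier semantics described above can be sketched with a policy in a non-default tier that hands evaluation to the next tier with a `Pass` rule (the tier name `internal-access` matches the sample; the policy name and selector are hypothetical). Note that a tiered policy's name is prefixed with its tier's name:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  # Hypothetical policy in the internal-access tier; the "internal-access."
  # prefix ties the policy name to its tier.
  name: internal-access.pass-frontend
  namespace: default
spec:
  tier: internal-access
  order: 10
  selector: app == 'frontend'
  types:
    - Ingress
  ingress:
    # Pass hands evaluation to the next tier instead of allowing or denying
    - action: Pass
```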
-```bash -kubectl get licensekeys.p -o custom-columns='Name:.metadata.name,MaxNodes:.status.maxnodes,Expiry:.status.expiry,PackageType:.status.package' -``` +**Multiple networks** -This is an example of the output of above command. +If multiple networks are enabled, workload endpoints will have additional labels which can be used in network policy selectors: -```text -Name MaxNodes Expiry Package +- `projectcalico.org/network`: The name of the network specified in the NetworkAttachmentDefinition. +- `projectcalico.org/network-namespace`: The namespace the network is in. +- `projectcalico.org/network-interface`: The network interface for the workload endpoint. -default 100 2021-10-01T23:59:59Z Enterprise -``` +For more information, see the [multiple-networks how-to guide](https://docs.tigera.io/calico-enterprise/latest/networking/configuring/multiple-networks). ## Sample YAML[​](#sample-yaml) ```yaml apiVersion: projectcalico.org/v3 -kind: LicenseKey +kind: WorkloadEndpoint metadata: - creationTimestamp: null + name: node1-k8s-my--nginx--b1337a-eth0 - name: default + namespace: default + + labels: + + app: frontend + + projectcalico.org/namespace: default + + projectcalico.org/orchestrator: k8s spec: - certificate: | + node: node1 - -----BEGIN CERTIFICATE----- + orchestrator: k8s - MII...n5 + endpoint: eth0 - -----END CERTIFICATE----- + containerID: 1337495556942031415926535 - token: eyJ...zaQ + pod: my-nginx-b1337a -status: + endpoint: eth0 - expiry: '2021-10-01T23:59:59Z' + interfaceName: cali0ef24ba - maxnodes: 100 + mac: ca:fe:1d:52:bb:e9 - package: Enterprise -``` + ipNetworks: -The data fields in the license key resource may change without warning. The license key resource is currently a singleton: the only valid name is `default`.
+ - 192.168.0.0/32 -## Supported operations[​](#supported-operations) + profiles: -| Datastore type | Create | Delete | Update | Get/List | Notes | -| --------------------- | ------ | ------ | ------ | -------- | ----- | -| Kubernetes API server | Yes | No | Yes | Yes | | + - profile1 -### Kubernetes controllers configuration + ports: -A Calico Enterprise [Kubernetes controllers](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/configuration) configuration resource (`KubeControllersConfiguration`) represents configuration options for the Calico Enterprise Kubernetes controllers. + - name: some-port -## Sample YAML[​](#sample-yaml) + port: 1234 -```yaml -apiVersion: projectcalico.org/v3 + protocol: TCP -kind: KubeControllersConfiguration + - name: another-port -metadata: + port: 5432 - name: default + protocol: UDP +``` -spec: +## Definitions[​](#definitions) - logSeverityScreen: Info +### Metadata[​](#metadata) - healthChecks: Enabled +| Field | Description | Accepted Values | Schema | Default | +| --------- | ------------------------------------------------------------------ | -------------------------------------------------- | ------ | --------- | +| name | The name of this workload endpoint resource. Required. | Alphanumeric string with optional `.`, `_`, or `-` | string | | +| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" | +| labels | A set of labels to apply to this endpoint. 
| | map | | - prometheusMetricsPort: 9094 +### Spec[​](#spec) - controllers: +| Field | Description | Accepted Values | Schema | Default | +| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | ---------------------------------------------- | ------- | +| workload | The name of the workload to which this endpoint belongs. | | string | | +| orchestrator | The orchestrator that created this endpoint. | | string | | +| node | The node where this endpoint resides. | | string | | +| containerID | The CNI CONTAINER\_ID of the workload endpoint. | | string | | +| pod | Kubernetes pod name for this workload endpoint. | | string | | +| endpoint | Container network interface name. | | string | | +| ipNetworks | The CIDRs assigned to the interface. | | List of strings | | +| ipNATs | List of 1:1 NAT mappings to apply to the endpoint. | | List of [IPNATs](#ipnat) | | +| awsElasticIPs | List of AWS Elastic IP addresses that should be considered for this workload; only used for workloads in an AWS-backed IP pool. This should be set via the `cni.projectcalico.org/awsElasticIPs` Pod annotation. | | List of valid IP addresses | | +| ipv4Gateway | The gateway IPv4 address for traffic from the workload. | | string | | +| ipv6Gateway | The gateway IPv6 address for traffic from the workload. | | string | | +| profiles | List of profiles assigned to this endpoint. | | List of strings | | +| interfaceName | The name of the host-side interface attached to the workload. | | string | | +| mac | The source MAC address of traffic generated by the workload. | | IEEE 802 MAC-48, EUI-48, or EUI-64 | | +| ports | List of named ports that this workload exposes.
| | List of [WorkloadEndpointPorts](#endpointport) | | - node: +### IPNAT[​](#ipnat) - reconcilerPeriod: 5m +IPNAT contains a single NAT mapping for a WorkloadEndpoint resource. - leakGracePeriod: 15m +| Field | Description | Accepted Values | Schema | Default | +| ---------- | ------------------------------------------- | ------------------ | ------ | ------- | +| internalIP | The internal IP address of the NAT mapping. | A valid IP address | string | | +| externalIP | The external IP address. | A valid IP address | string | | - syncLabels: Enabled +### EndpointPort[​](#endpointport) - hostEndpoint: +A WorkloadEndpointPort associates a name with a particular TCP/UDP/SCTP port of the endpoint, allowing it to be referenced as a named port in [policy rules](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#entityrule). - autoCreate: Disabled +| Field | Description | Accepted Values | Schema | Default | +| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------- | ------ | ------- | +| name | The name to attach to this port, allowing it to be referred to in [policy rules](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#entityrule). Names must be unique within an endpoint. | | string | | +| protocol | The protocol of this named port. | `TCP`, `UDP`, `SCTP` | string | | +| port | The workload port number. | `1`-`65535` | int | | +| hostPort | Port on the host that is forwarded to this port. | `1`-`65535` | int | | +| hostIP | IP address on the host on which the hostPort is accessible. | A valid IP address | string | | - createDefaultHostEndpoint: Enabled +> **SECONDARY:** On their own, WorkloadEndpointPort entries don't result in any change to the connectivity of the port.
They only have an effect if they are referred to in policy. - templates: +> **SECONDARY:** The hostPort and hostIP fields are read-only and determined from Kubernetes hostPort configuration. These fields are used only when host ports are enabled in Calico. - - generateName: custom-host-endpoint +## Supported operations[​](#supported-operations) - interfaceCIDRs: +| Datastore type | Create/Delete | Update | Get/List | Notes | +| --------------------- | ------------- | ------ | -------- | -------------------------------------------------------- | +| Kubernetes API server | No | Yes | Yes | WorkloadEndpoints are directly tied to a Kubernetes pod. | - - 1.2.3.0/24 +### Architecture - interfacePattern: "eth0|eth1" + - nodeSelector: "has(my-label)" +## [📄️Component architecture](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/overview) - labels: +[Understand the Calico Enterprise components and the basics of BGP networking.](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/overview) - key: value +## [📄️'The Calico Enterprise data path: IP routing and iptables'](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/data-path) - loadbalancer: +[Learn how packets flow between workloads in a datacenter, or between a workload and the internet.](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/data-path) - assignIPs: AllServices -``` +## [🗃Network design](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/) -## Kubernetes controllers configuration definition[​](#kubernetes-controllers-configuration-definition) +[2 items](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/) -### Metadata[​](#metadata) +### Component architecture -| Field | Description | Accepted Values | Schema | -| ----- | --------------------------------------------------------- | ----------------- | ------ | -| name | Unique name to describe this resource instance. 
Required. | Must be `default` | string | + -- Calico Enterprise automatically creates a resource named `default` containing the configuration settings, only the name `default` is used and only one object of this type is allowed. You can use [calicoctl](https://docs.tigera.io/calico-enterprise/latest/reference/clis/calicoctl/overview) to view and edit these settings +## About Calico Enterprise architecture[​](#about-calico-enterprise-architecture) -### Spec[​](#spec) +The following diagram shows the components that comprise a Kubernetes on-premises deployment using the Calico Enterprise CNI for networking and network policy. -| Field | Description | Accepted Values | Schema | Default | -| --------------------- | --------------------------------------------------------- | ----------------------------------- | --------------------------- | ------- | -| logSeverityScreen | The log severity above which logs are sent to the stdout. | Debug, Info, Warning, Error, Fatal | string | Info | -| healthChecks | Enable support for health checks | Enabled, Disabled | string | Enabled | -| prometheusMetricsPort | Port on which to serve prometheus metrics. | Set to 0 to disable, > 0 to enable. 
| TCP port | 9094 | -| controllers | Enabled controllers and their settings | | [Controllers](#controllers) | | +**Tip**: For best visibility, right-click on the image below and select "Open image in new tab" -### Controllers[​](#controllers) +![Architecture](https://docs.tigera.io/img/calico-enterprise/architecture-ee-new.svg) -| Field | Description | Schema | -| ----------------- | ------------------------------------------------------ | ------------------------------------------------------------------------------- | -| node | Enable and configure the node controller | omit to disable, or [NodeController](#nodecontroller) | -| federatedservices | Enable and configure the federated services controller | omit to disable, or [FederatedServicesController](#federatedservicescontroller) | +Calico open-source components are the foundation of Calico Enterprise. Calico Enterprise provides value-added components for visibility and troubleshooting, compliance, policy lifecycle management, threat detection, and multi-cluster management. -### NodeController[​](#nodecontroller) +## Calico Enterprise components[​](#calico-enterprise-components) -The node controller automatically cleans up configuration for nodes that no longer exist. Optionally, it can create host endpoints for all Kubernetes nodes. 
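The node controller options described here can be sketched with a minimal `KubeControllersConfiguration` fragment (values are illustrative, drawn from the fuller sample shown earlier):

```yaml
apiVersion: projectcalico.org/v3
kind: KubeControllersConfiguration
metadata:
  name: default
spec:
  controllers:
    node:
      # Copy Kubernetes node labels to Calico Enterprise node objects
      syncLabels: Enabled
      # Automatically create a host endpoint for each node
      hostEndpoint:
        autoCreate: Enabled
```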
+- [calicoq](#calicoq) +- [Compliance](#compliance) +- [Linseed API and ES gateway](#linseed-api-and-es-gateway) +- [Intrusion detection](#intrusion-detection) +- [kube-controllers](#kube-controllers) +- [Manager](#manager) +- [Packet capture API](#packet-capture-api) +- [Prometheus API service](#prometheus-api-service) -| Field | Description | Accepted Values | Schema | Default | -| ---------------- | -------------------------------------------------------------------------------------- | ----------------- | ------------------------------------------------------------- | ------- | -| reconcilerPeriod | Period to perform reconciliation with the Calico Enterprise datastore | | [Duration string](https://golang.org/pkg/time/#ParseDuration) | 5m | -| syncLabels | When enabled, Kubernetes node labels will be copied to Calico Enterprise node objects. | Enabled, Disabled | string | Enabled | -| hostEndpoint | Configures the host endpoint controller | | [HostEndpoint](#hostendpoint) | | -| leakGracePeriod | Grace period to use when garbage collecting suspected leaked IP addresses. 
| | [Duration string](https://golang.org/pkg/time/#ParseDuration) | 15m | +## Bundled third-party components[​](#bundled-third-party-components) -### HostEndpoint[​](#hostendpoint) +- [fluentd](#fluentd) +- [Elasticsearch and Kibana](#elasticsearch-and-kibana) +- [Prometheus](#prometheus) -| Field | Description | Accepted Values | Schema | Default | -| ------------------------- | --------------------------------------------------- | ----------------- | --------------------- | -------- | -| autoCreate | When enabled, automatically create host endpoints | Enabled, Disabled | string | Disabled | -| createDefaultHostEndpoint | When enabled, default host endpoint will be created | Enabled, Disabled | string | Enabled | -| templates | Controls creation of custom host endpoints | | [Template](#template) | | +## Calico open-source components[​](#calico-open-source-components) -### Template[​](#template) +- [API server](#api-server) +- [Felix](#felix) +- [BIRD](#bird) +- [calicoctl](#calicoctl) +- [calico-node](#calico-node) +- [confd](#confd) +- [CNI plugin](#cni-plugin) +- [Datastore plugin](#datastore-plugin) +- [IPAM plugin](#ipam-plugin) +- [Typha](#typha) -| Field | Description | Accepted Values | Schema | Default | -| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------- | ---------------------------------- | ------- | -| generateName | Unique name used as suffix for host endpoints created based on this template | Alphanumeric string | string | | -| nodeSelector | Selects the nodes for which this template should create host endpoints | | [Selector](#selectors) | all() | -| interfaceCIDRs | This configuration defines which IP addresses from 
a node's specification (including standard, tunnel, and WireGuard IPs) are eligible for inclusion in the generated HostEndpoint. IP addresses must fall within the provided CIDR ranges to be considered. If no address on the node matches the specified CIDRs, the HostEndpoint creation is skipped. | List of valid CIDRs | List string | | -| interfacePattern | Regex to include matching interfaces and their IPs | string | string | | -| labels | Labels to be added to generated host endpoints matching this template | | map of string key to string values | | +## Kubernetes components[​](#kubernetes-components) -### Selectors[​](#selectors) +- [Kubernetes API server](#kubernetes-api-server) +- [kubectl](#kubectl) -A label selector is an expression which either matches or does not match a resource based on its labels. +## Cloud orchestrator plugins (not pictured)[​](#cloud-orchestrator-plugins-not-pictured) -Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses. +Translates the orchestrator APIs for managing networks to the Calico Enterprise data-model and datastore. -| Expression | Meaning | -| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| **Logical operators** | | -| `( )` | Matches if and only if `` matches. (Parentheses are used for grouping expressions.) | -| `! ` | Matches if and only if `` does not match. **Tip:** `!` is a special character at the start of a YAML string, if you need to use `!` at the start of a YAML string, enclose the string in quotes. | -| ` && ` | "And": matches if and only if both ``, and, `` matches | -| ` \|\| ` | "Or": matches if and only if either ``, or, `` matches. | -| **Match operators** | | -| `all()` | Match all in-scope resources. 
To match *no* resources, combine this operator with `!` to form `!all()`. | -| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. | -| `k == 'v'` | Matches resources with the label 'k' and value 'v'. | -| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` | -| `has(k)` | Matches resources with label 'k', independent of value. To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` | -| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set | -| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set | -| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' | -| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' | -| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' | +For cloud providers, Calico Enterprise has a separate plugin for each major cloud orchestration platform. This allows Calico Enterprise to tightly bind to the orchestrator, so users can manage the Calico Enterprise network using their orchestrator tools. When required, the orchestrator plugin provides feedback from the Calico Enterprise network to the orchestrator. For example, providing information about Felix liveness, and marking specific endpoints as failed if network setup fails. -Operators have the following precedence: +## Calico Enterprise components[​](#calico-enterprise-components-1) -- **Highest**: all the match operators -- Parentheses `( ... )` -- Negation with `!` -- Conjunction with `&&` -- **Lowest**: Disjunction with `||` +### calicoq[​](#calicoq) -For example, the expression +**Main task**: A command line tool for policy inspection to ensure policies are configured as intended. 
For example, you can determine which endpoints a selector or policy matches, or which policies apply to an endpoint. Requires a separate installation. [calicoq](https://docs.tigera.io/calico-enterprise/latest/reference/clis/calicoq/). -```text -! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'} -``` +### Compliance[​](#compliance) -Would be "bracketed" like this: +**Main task**: Generates compliance reports for the Kubernetes cluster. Reports are based on archived flow and audit logs for Calico Enterprise resources, plus any audit logs you’ve configured for Kubernetes resources in the Kubernetes API server. Compliance reports provide the following high-level information: -```text -((!(has(my-label)) || ((my-label starts with 'prod') && (role in {'frontend','business'})) -``` +- Protection + + - Endpoints explicitly protected using ingress or egress policy -It would match: +- Policies and services -- Any resource that did not have label "my-label". + -- Any resource that both: + - Policies and services associated with endpoints + - Policy audit logs +- Traffic + - Allowed ingress/egress traffic to/from namespaces, and to/from the internet - - Has a value for `my-label` that starts with "prod", and, - - Has a role label with value either "frontend", or "business". +Compliance comprises these components: -### FederatedServicesController[​](#federatedservicescontroller) +**compliance-snapshotter** -The federated services controller syncs Kubernetes services from remote clusters defined through [RemoteClusterConfigurations](https://docs.tigera.io/calico-enterprise/latest/reference/resources/remoteclusterconfiguration). +Handles listing of required Kubernetes and Calico Enterprise configuration and pushes snapshots to Elasticsearch. Snapshots give you visibility into configuration changes, and how the cluster-wide configuration has evolved within a reporting interval.
-| Field | Description | Schema | Default | -| ---------------- | --------------------------------------------------------------------- | ------------------------------------------------------------- | ------- | -| reconcilerPeriod | Period to perform reconciliation with the Calico Enterprise datastore | [Duration string](https://golang.org/pkg/time/#ParseDuration) | 5m | +**compliance-reporter** -### LoadBalancerController[​](#loadbalancercontroller) +Handles report generation. Reads configuration history from Elasticsearch and determines time evolution of cluster-wide configuration, including relationships between policies, endpoints, services and networksets. Data is then passed through a zero-trust aggregator to determine the "worst-case outliers" in the reporting interval. -The load balancer controller manages IPAM for Services of type LoadBalancer. +**compliance-controller** -| Field | Description | Accepted Values | Schema | Default | -| --------- | ---------------------------------------------- | ---------------------------------- | ------ | ----------- | -| assignIPs | Mode in which LoadBalancer controller operates | AllServices, RequestedServicesOnly | String | AllServices | +Reads report configuration, and manages creation, deletion, and monitoring of report generation jobs. -## Supported operations[​](#supported-operations) +**compliance-server** -| Datastore type | Create | Delete (Global `default`) | Update | Get/List | Notes | -| --------------------- | ------ | ------------------------- | ------ | -------- | ----- | -| Kubernetes API server | Yes | Yes | Yes | Yes | | +Provides the API for listing, downloading, and rendering reports, and enforces RBAC by performing authentication and authorization through the Kubernetes API server. RBAC is determined from the user's RBAC for the GlobalReportType and GlobalReport resources.
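A sketch of the Kubernetes RBAC this implies for report access (the role name is hypothetical; the resource names follow the GlobalReport and GlobalReportType resources mentioned above):

```yaml
# Hypothetical ClusterRole granting read-only access to compliance reports
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: compliance-report-reader
rules:
  - apiGroups: ["projectcalico.org"]
    resources: ["globalreports", "globalreporttypes"]
    verbs: ["get", "list"]
```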
-### Managed Cluster +**compliance-benchmarker** -A Managed Cluster resource (`ManagedCluster`) represents a cluster managed by a centralized management plane with a shared Elasticsearch. The management plane provides central control of the managed cluster and stores its logs. +A daemonset that runs checks in the CIS Kubernetes Benchmark on each node so you can see if Kubernetes is securely deployed. -Calico Enterprise supports connecting multiple Calico Enterprise clusters as describe in the \[Multi-cluster management] installation guide. +### Linseed API and ES gateway[​](#linseed-api-and-es-gateway) -For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `managedcluster`,`managedclusters`, `managedcluster.projectcalico.org`, `managedclusters.projectcalico.org` as well as abbreviations such as `managedcluster.p` and `managedclusters.p`. +The Linseed API uses mTLS to connect to clients, and provides an API to access Elasticsearch data. The ES gateway proxies requests to Elasticsearch, and provides backwards-compatibility for managed clusters that run versions before 3.17. -## Sample YAML[​](#sample-yaml) +### Intrusion detection[​](#intrusion-detection) -```yaml -apiVersion: projectcalico.org/v3 +**Main task**: Consists of a controller that handles integrations with threat intelligence feeds and Calico Enterprise custom alerts, and an installer that installs the Kibana dashboards for viewing jobs through the Kibana UI. -kind: ManagedCluster +### kube-controllers[​](#kube-controllers) -metadata: +**Main task**: Monitors the Kubernetes API and performs actions based on cluster state. 
The Calico Enterprise kube-controllers container includes these controllers: - name: managed-cluster +- Node +- Service +- Federated services +- Authorization +- Managed cluster (for management clusters only) -spec: +### Manager[​](#manager) - operatorNamespace: tigera-operator -``` +**Main task**: Provides network traffic visibility, centralized multi-cluster management, threat-defense troubleshooting, policy lifecycle management, and compliance using a browser-based UI for multiple roles/stakeholders. [Manager](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#manager). -## Managed cluster definition[​](#managed-cluster-definition) +### Packet capture API[​](#packet-capture-api) -### Metadata[​](#metadata) +**Main task**: Retrieves capture files (pcap format) generated by a packet capture for use with network protocol analysis tools like Wireshark. The packet capture feature is installed by default in all cluster types. Packet capture data is visible in the web console's Service Graph. -| Field | Description | Accepted Values | Schema | -| ----- | --------------------------------------------------------- | --------------------------------------------------- | ------ | -| name | Unique name to describe this resource instance. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | +### Prometheus API service[​](#prometheus-api-service) -- `cluster` is a reserved name for the management plane and is considered an invalid value +**Main task**: A proxy querying service that checks a user’s token RBAC to validate its scope and forwards the query to the Prometheus monitoring component.
-### Spec[​](#spec) +## Bundled third-party components[​](#bundled-third-party-components-1) -| Field | Description | Accepted Values | Schema | Default | -| -------------------- | ----------------------------------------------------------------------------------------------------------------- | --------------- | ------ | ------- | -| installationManifest | Installation Manifest to be applied on a managed cluster infrastructure | None | string | `Empty` | -| operatorNamespace | The namespace of the managed cluster's operator. This value is used in the generation of the InstallationManifest | None | string | `Empty` | +### Elasticsearch and Kibana[​](#elasticsearch-and-kibana) -- `installationManifest` field can be retrieved only once at creation time. Updates are not supported for this field. +**Main task**: Built-in third-party search-engine and visualization dashboard, which provide logs for visibility into workloads, to troubleshoot Kubernetes clusters. Installed and configured by default. [Elasticsearch](https://docs.tigera.io/calico-enterprise/latest/observability/). -To extract the installation manifest at creation time `-o jsonpath="{.spec.installationManifest}"` parameters can be used with a `kubectl` command. +### fluentd[​](#fluentd) -### Status[​](#status) +**Main task**: Collects and forwards Calico Enterprise logs (flows, DNS, L7) to Elasticsearch. Open source data collector for unified logging. [fluentd open source](https://www.fluentd.org/). -Status represents the latest observed status of Managed cluster. The `status` is read-only for users and updated by the Calico Enterprise components. +### Prometheus[​](#prometheus) -| Field | Description | Schema | -| ---------- | -------------------------------------------------------------------------- | -------------------------------------- | -| conditions | List of condition that describe the current status of the Managed cluster. 
| List of ManagedClusterStatusConditions | +**Main task**: The default monitoring component for collecting Calico Enterprise policy metrics. It can also be used to collect metrics on calico/nodes from Felix. Prometheus is an open-source toolkit for systems monitoring and alerting. [Prometheus metrics](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/prometheus), and [Configure Prometheus](https://docs.tigera.io/calico-enterprise/latest/operations/monitor/). -**ManagedClusterStatusConditions** +## Calico open-source components[​](#calico-open-source-components-1) -Conditions represent the latest observed set of conditions for a Managed cluster. The connection between a management plane and managed plane will be reported as following: +### API server[​](#api-server) -- `Unknown` when no initial connection has been established -- `True` when both planes have an established connection -- `False` when neither planes have an established connection +**Main task**: Allows users to manage Calico Enterprise resources such as policies and tiers through `kubectl` or the Kubernetes API. `kubectl` has significant advantages over `calicoctl` including: audit logging, RBAC using Kubernetes Roles and RoleBindings, and not needing to provide privileged Kubernetes CRD access to anyone who needs to manage resources. [API server](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserver). 
-| Field | Description | Accepted Values | Schema | Default |
-| ------ | ------------------------------------------------------------------------- | -------------------------- | ------ | ------------------------- |
-| type | Type of status that is being reported | - | string | `ManagedClusterConnected` |
-| status | Status of the connection between a Managed cluster and management cluster | `Unknown`, `True`, `False` | string | `Unknown` |
+### BIRD[​](#bird)

-[Multi-cluster management](https://docs.tigera.io/calico-enterprise/latest/multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster)
+**Main task**: Gets routes from Felix and distributes to BGP peers on the network for inter-host routing. Runs on each node that hosts a Felix agent. Open source, internet routing daemon. [BIRD](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration#content-main).

-### Network policy
+The BGP client is responsible for:

-
+- **Route distribution**

-
+  When Felix inserts routes into the Linux kernel FIB, the BGP client distributes them to other nodes in the deployment. This ensures efficient traffic routing for the deployment.

-
+- **BGP route reflector configuration**

-
+  BGP route reflectors are often configured for large deployments rather than a standard BGP client. (Standard BGP requires that every BGP client be connected to every other BGP client in a mesh topology, which is difficult to maintain.) For redundancy, you can seamlessly deploy multiple BGP route reflectors. Note that BGP route reflectors are involved only in control of the network: endpoint data does not pass through them. When the Calico Enterprise BGP client advertises routes from its FIB to the route reflector, the route reflector advertises those routes to the other nodes in the deployment. 
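The maintenance burden that pushes large deployments toward route reflectors is easy to quantify. A toy Python sketch (illustrative only; these helper functions are not part of Calico Enterprise) compares BGP session counts for a full mesh against a route reflector topology:

```python
# Illustrative sketch (not Calico Enterprise code): compare the number of BGP
# sessions needed for a full iBGP mesh versus a route reflector topology.

def full_mesh_sessions(nodes: int) -> int:
    # Every node peers with every other node: n * (n - 1) / 2 sessions.
    return nodes * (nodes - 1) // 2

def route_reflector_sessions(nodes: int, reflectors: int = 1) -> int:
    # Each client peers only with each reflector; the reflectors
    # themselves form a small full mesh for redundancy.
    clients = nodes - reflectors
    return clients * reflectors + full_mesh_sessions(reflectors)

# A 100-node cluster: 4950 sessions as a full mesh, but only 197 with
# two redundant route reflectors.
print(full_mesh_sessions(100))           # 4950
print(route_reflector_sessions(100, 2))  # 197
```

Session count grows linearly with node count under a reflector instead of quadratically under a mesh, which is why reflectors are the usual choice at scale.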
-
+### calicoctl[​](#calicoctl)

-
+**Main task**: Command line interface used largely during pre-installation for CRUD operations on Calico Enterprise objects. `kubectl` is the recommended CLI for CRUD operations. calicoctl is available on any host with network access to the Calico Enterprise datastore as either a binary or a container. Requires separate installation. [calicoctl](https://docs.tigera.io/calico-enterprise/latest/reference/clis/calicoctl/).

-
+### calico-node[​](#calico-node)

-
+**Main task**: Bundles key components that are required for networking containers with Calico Enterprise:

-
+- Felix
+- BIRD
+- confd

-A network policy resource (`NetworkPolicy`) represents an ordered set of rules which are applied to a collection of endpoints that match a [label selector](#selector).
+The calico repository contains the Dockerfile for calico-node, along with various configuration files to configure and “glue” these components together. In addition, we use runit for logging (svlogd) and init (runsv) services. [calico-node](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration).

-`NetworkPolicy` is a namespaced resource. `NetworkPolicy` in a specific namespace only applies to [workload endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) in that namespace. Two resources are in the same namespace if the `namespace` value is set the same on both. See [global network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) for non-namespaced network policy.
+### CNI plugin[​](#cni-plugin)

-`NetworkPolicy` resources can be used to define network connectivity rules between groups of Calico Enterprise endpoints and host endpoints, and take precedence over [profile resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) if any are defined. 
+**Main task**: Provides Calico Enterprise networking for Kubernetes clusters. -NetworkPolicies are organized into [tiers](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the next [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), to enable hierarchical security policy. +The Calico CNI plugin allows you to use Calico networking for any orchestrator that makes use of the CNI networking specification. The Calico binary that presents this API to Kubernetes is called the CNI plugin, and must be installed on every node in the Kubernetes cluster. Configured through the standard [CNI configuration mechanism](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration), and [Calico CNI plugin](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration). -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `networkpolicy.projectcalico.org`, `networkpolicies.projectcalico.org` and abbreviations such as `networkpolicy.p` and `networkpolicies.p`. +### confd[​](#confd) -## Sample YAML[​](#sample-yaml) +**Main task**: Monitors Calico Enterprise datastore for changes to BGP configuration and global defaults such as AS number, logging levels, and IPAM information. An open source, lightweight configuration management tool. -This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on `database` endpoints. +Confd dynamically generates BIRD configuration files based on the updates to data in the datastore. When the configuration file changes, confd triggers BIRD to load the new files. 
[Configure confd](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration#content-main), and [confd project](https://github.com/kelseyhightower/confd).

-```yaml
-apiVersion: projectcalico.org/v3
+### Datastore plugin[​](#datastore-plugin)

-kind: NetworkPolicy
+**Main task**: The datastore for the Calico Enterprise CNI plugin. The Kubernetes API datastore:

-metadata:
+- Is simple to manage because it does not require an extra datastore
+- Uses Kubernetes RBAC to control access to Calico resources
+- Uses Kubernetes audit logging to generate audit logs of changes to Calico Enterprise resources

- name: internal-access.allow-tcp-6379
+### Felix[​](#felix)

- namespace: production
+**Main task**: Programs routes and ACLs, and anything else required on the host to provide desired connectivity for the endpoints on that host. Runs on each machine that hosts endpoints. Runs as an agent daemon. [Felix resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/felixconfig).

-spec:
+Depending on the specific orchestrator environment, Felix is responsible for:

- tier: internal-access
+- **Interface management**

- selector: role == 'database'
+  Programs information about interfaces into the kernel so the kernel can correctly handle the traffic from that endpoint. In particular, it ensures that the host responds to ARP requests from each workload with the MAC of the host, and enables IP forwarding for interfaces that it manages. It also monitors interfaces to ensure that the programming is applied at the appropriate time.

- types:
+- **Route programming**

- - Ingress
+  Programs routes to the endpoints on its host into the Linux kernel FIB (Forwarding Information Base). This ensures that packets destined for those endpoints that arrive at the host are forwarded accordingly. 
- - Egress
+- **ACL programming**

- ingress:
+  Programs ACLs into the Linux kernel to ensure that only valid traffic can be sent between endpoints, and that endpoints cannot circumvent Calico Enterprise security measures.

- - action: Allow
+- **State reporting**

- metadata:
+  Provides network health data. In particular, it reports errors and problems when configuring its host. This data is written to the datastore so it is visible to other components and operators of the network.

- annotations:
+> **SECONDARY:** `node` can be run in *policy only mode* where Felix runs without BIRD and confd. This provides policy management without route distribution between hosts, and is used for deployments like managed cloud providers.

- from: frontend
+### IPAM plugin[​](#ipam-plugin)

- to: database
+**Main task**: Uses Calico Enterprise’s IP pool resource to control how IP addresses are allocated to pods within the cluster. It is the default plugin used by most Calico Enterprise installations. It is one of the Calico Enterprise [CNI plugins](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration).

- protocol: TCP
+### Typha[​](#typha)

- source:
+**Main task**: Increases scale by reducing each node’s impact on the datastore. Runs as a daemon between the datastore and instances of Felix. Installed by default, but not configured. [Typha description](https://github.com/projectcalico/typha), and [Typha component](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/typha/).

- selector: role == 'frontend'
+Typha maintains a single datastore connection on behalf of all of its clients like Felix and confd. It caches the datastore state and deduplicates events so that they can be fanned out to many listeners. Because one Typha instance can support hundreds of Felix instances, it reduces the load on the datastore by a large factor. 
And because Typha can filter out updates that are not relevant to Felix, it also reduces Felix’s CPU usage. In a high-scale (100+ node) Kubernetes cluster, this is essential because the number of updates generated by the API server scales with the number of nodes.

- destination:
+## Kubernetes components[​](#kubernetes-components-1)

- ports:
+### Kubernetes API server[​](#kubernetes-api-server)

- - 6379
+**Main task**: A Kubernetes component that validates and configures data for the API objects (for example, pods, services, and others). Proxies requests for Calico Enterprise API resources to the Calico Enterprise API server through an aggregation layer.

- egress:
+### kubectl[​](#kubectl)

- - action: Allow
+**Main task**: The recommended command line interface for CRUD operations on Calico Enterprise and Calico objects. [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/).
+
+### The Calico Enterprise data path: IP routing and iptables
+
+One of Calico Enterprise’s key features is how packets flow between workloads in a data center, or between a workload and the Internet, without additional encapsulation.
+
+In the Calico Enterprise approach, IP packets to or from a workload are routed and firewalled by the Linux routing table and iptables or eBPF infrastructure on the workload’s host. For a workload that is sending packets, Calico Enterprise ensures that the host is always returned as the next hop MAC address regardless of whatever routing the workload itself might configure. For packets addressed to a workload, the last IP hop is that from the destination workload’s host to the workload itself.
+
+![Calico datapath](https://docs.tigera.io/assets/images/calico-datapath-164f6f29c7b21889c1d4b517a2695533.png)
+
+Suppose that IPv4 addresses for the workloads are allocated from a datacenter-private subnet of 10.65/16, and that the hosts have IP addresses from 172.18.203/24. 
If you look at the routing table on a host: + +```bash +route -n ``` -## Definition[​](#definition) +You will see something like this: -### Metadata[​](#metadata) +```text +Kernel IP routing table -| Field | Description | Accepted Values | Schema | Default | -| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- | -| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | | -| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" | +Destination Gateway Genmask Flags Metric Ref Use Iface -### Spec[​](#spec) +0.0.0.0 172.18.203.1 0.0.0.0 UG 0 0 0 eth0 -| Field | Description | Accepted Values | Schema | Default | -| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- | -| order | Controls the order of precedence. Calico Enterprise applies the policy with the lowest value first. | | float | | -| tier | Name of the [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier) this policy belongs to. | | string | `default` | -| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() | -| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* | -| ingress | Ordered list of ingress rules applied by policy. 
| | List of [Rule](#rule) | | -| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | | -| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select a specific service account by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() | -| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | | +10.65.0.0 0.0.0.0 255.255.0.0 U 0 0 0 ns-db03ab89-b4 -\* If `types` has no value, Calico Enterprise defaults as follows. +10.65.0.21 172.18.203.126 255.255.255.255 UGH 0 0 0 eth0 -> | Ingress Rules Present | Egress Rules Present | `Types` value | -> | --------------------- | -------------------- | ----------------- | -> | No | No | `Ingress` | -> | Yes | No | `Ingress` | -> | No | Yes | `Egress` | -> | Yes | Yes | `Ingress, Egress` | +10.65.0.22 172.18.203.129 255.255.255.255 UGH 0 0 0 eth0 -### Rule[​](#rule) +10.65.0.23 172.18.203.129 255.255.255.255 UGH 0 0 0 eth0 -A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order. +10.65.0.24 0.0.0.0 255.255.255.255 UH 0 0 0 tapa429fb36-04 -| Field | Description | Accepted Values | Schema | Default | -| ----------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | ----------------------------- | ------- | -| metadata | Per-rule metadata. | | [RuleMetadata](#rulemetadata) | | -| action | Action to perform when matching this rule. | `Allow`, `Deny`, `Log`, `Pass` | string | | -| protocol | Positive protocol match. 
| `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | -| notProtocol | Negative protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | -| icmp | ICMP match criteria. | | [ICMP](#icmp) | | -| notICMP | Negative match on ICMP. | | [ICMP](#icmp) | | -| ipVersion | Positive IP version match. | `4`, `6` | integer | | -| source | Source match parameters. | | [EntityRule](#entityrule) | | -| destination | Destination match parameters. | | [EntityRule](#entityrule) | | -| http | Match HTTP request parameters. Application layer policy must be enabled to use this field. | | [HTTPMatch](#httpmatch) | | +172.18.203.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 +``` -After a `Log` action, processing continues with the next rule; `Allow` and `Deny` are immediate and final and no further rules are processed. +There is one workload on this host with IP address 10.65.0.24, and accessible from the host via a TAP (or veth, etc.) interface named tapa429fb36-04. Hence there is a direct route for 10.65.0.24, through tapa429fb36-04. Other workloads, with the .21, .22 and .23 addresses, are hosted on two other hosts (172.18.203.126 and .129), so the routes for those workload addresses are via those hosts. -An `action` of `Pass` in a `NetworkPolicy` or `GlobalNetworkPolicy` will skip over the remaining policies and jump to the first [profile](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) assigned to the endpoint, applying the policy configured in the profile; if there are no Profiles configured for the endpoint the default applied action is `Deny`. +The direct routes are set up by a Calico Enterprise agent named Felix when it is asked to provision connectivity for a particular workload. A BGP client (such as BIRD) then notices those and distributes them – perhaps via a route reflector – to BGP clients running on other hosts, and hence the indirect routes appear also. 
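The direct-versus-distributed distinction in the example table can be sketched in a few lines of Python (a toy illustration using the addresses from the output above; the zero-gateway test is just how a `route -n` entry shows a directly attached destination, not anything Calico-specific):

```python
# Toy sketch: split the example routing table into Felix-programmed direct
# routes (gateway 0.0.0.0, via a tap/veth interface) and BGP-distributed
# indirect routes (gateway = the IP of the host running the workload).

routes = [
    # (destination, gateway, interface) -- values from the example `route -n` output
    ("10.65.0.21", "172.18.203.126", "eth0"),
    ("10.65.0.22", "172.18.203.129", "eth0"),
    ("10.65.0.23", "172.18.203.129", "eth0"),
    ("10.65.0.24", "0.0.0.0", "tapa429fb36-04"),
]

def classify(gateway: str) -> str:
    # A zero gateway means the destination is directly attached to this host.
    return "direct" if gateway == "0.0.0.0" else "indirect"

for dest, gateway, iface in routes:
    print(f"{dest}: {classify(gateway)} via {iface}")
```

Only 10.65.0.24 classifies as direct here; the other three workload routes point at the hosts (.126 and .129) that BGP learned them from.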
-### RuleMetadata[​](#rulemetadata) +## Is that all?[​](#is-that-all) -Metadata associated with a specific rule (rather than the policy as a whole). The contents of the metadata does not affect how a rule is interpreted or enforced; it is simply a way to store additional information for use by operators or applications that interact with Calico Enterprise. +As far as the static data path is concerned, yes. It’s just a combination of responding to workload ARP requests with the host MAC, IP routing and iptables or eBPF. There’s a great deal more to Calico Enterprise in terms of how the required routing and security information is managed, and for handling dynamic things such as workload migration – but the basic data path really is that simple. -| Field | Description | Schema | Default | -| ----------- | ----------------------------------- | ----------------------- | ------- | -| annotations | Arbitrary non-identifying metadata. | map of string to string | | +### Network design -Example: + -```yaml -metadata: +## [📄️Calico over Ethernet fabrics](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric) - annotations: +[Understand the interconnect fabric options in a Calico network.](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric) - app: database +## [📄️Calico over IP fabrics](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l3-interconnect-fabric) - owner: devops -``` +[Understand considerations for implementing interconnect fabrics with Calico.](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l3-interconnect-fabric) -Annotations follow the [same rules as Kubernetes for valid syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set). 
+### Calico over Ethernet fabrics -On Linux with the iptables data plane, rule annotations are rendered as comments in the form `-m comment --comment "="` on the iptables rule(s) that correspond to the Calico Enterprise rule. +Any technology that is capable of transporting IP packets can be used as the interconnect fabric in a Calico Enterprise network. This means that the standard tools used to transport IP, such as MPLS and Ethernet can be used in a Calico Enterprise network. -### ICMP[​](#icmp) +The focus of this article is on Ethernet as the interconnect network. Most at-scale cloud operators have converted to IP fabrics, and that infrastructure will work for Calico Enterprise as well. However, the concerns that drove most of those operators to IP as the interconnection network in their pods are largely ameliorated by Calico Enterprise, allowing Ethernet to be viably considered as a Calico Enterprise interconnect, even in large-scale deployments. -| Field | Description | Accepted Values | Schema | Default | -| ----- | ------------------- | -------------------- | ------- | ------- | -| type | Match on ICMP type. | Can be integer 0-254 | integer | | -| code | Match on ICMP code. | Can be integer 0-255 | integer | | +## Concerns over Ethernet at scale[​](#concerns-over-ethernet-at-scale) -### EntityRule[​](#entityrule) +It has been acknowledged by the industry for years that, beyond a certain size, classical Ethernet networks are unsuitable for production deployment. Although there have been [multiple](https://en.wikipedia.org/wiki/Provider_Backbone_Bridge_Traffic_Engineering) [attempts](https://web.archive.org/web/20150923231827/https://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_14-3/143_trill.html) [to address](https://en.wikipedia.org/wiki/Virtual_Private_LAN_Service) these issues, the scale-out networking community has largely abandoned Ethernet for anything other than providing physical point-to-point links in the networking fabric. 
The principal reasons for Ethernet failures at large scale are:

-Entity rules specify the attributes of the source or destination of a packet that must match for the rule as a whole to match. Packets can be matched on combinations of:

+- Large numbers of *endpoints* ([note 1](#note-1))

-- Identity of the source/destination, by using [Selectors](#selectors) or by specifying a particular Kubernetes `Service`. Selectors can match [workload endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint), [host endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint) and ([namespaced](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkset) or [global](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset)) network sets.
-- Source/destination IP address, protocol and port.
+  Each switch in an Ethernet network must learn the path to all Ethernet endpoints that are connected to the Ethernet network. Learning this amount of state can become a substantial task when we are talking about hundreds of thousands of *endpoints*.

-If the rule contains multiple match criteria (for example, an IP and a port) then all match criteria must match for the rule as a whole to match a packet. 
+- High rate of *churn* or change in the network -| Field | Description | Accepted Values | Schema | Default | -| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------- | -| nets | Match packets with IP in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | -| notNets | Negative match on CIDRs. Match packets with IP not in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | -| selector | Positive match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | -| notSelector | Negative match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | -| namespaceSelector | Positive match on selected namespaces. If specified, only workload endpoints in the selected Kubernetes namespaces are matched. Matches namespaces based on the labels that have been applied to the namespaces. 
Defines the scope that selectors will apply to, if not defined then selectors apply to the NetworkPolicy's namespace. Match a specific namespace by name using the `projectcalico.org/name` label. Select the non-namespaced resources like GlobalNetworkSet(s), host endpoints to which this policy applies by using `global()` selector. | Valid selector | [selector](#selector) | | -| ports | Positive match on the specified ports | | list of [ports](#ports) | | -| domains | Positive match on [domain names](#exact-and-wildcard-domain-names). | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list of strings | | -| notPorts | Negative match on the specified ports | | list of [ports](#ports) | | -| serviceAccounts | Match endpoints running under service accounts. If a `namespaceSelector` is also defined, the set of service accounts this applies to is limited to the service accounts in the selected namespaces. | | [ServiceAccountMatch](#serviceaccountmatch) | | -| services | Match the specified service(s). If specified on egress rule destinations, no other selection criteria can be set. If specified on ingress rule sources, only positive or negative matches on ports can be specified. | | [ServiceMatch](#servicematch) | | + With that many endpoints, most of them being ephemeral (such as virtual machines or containers), there is a large amount of *churn* in the network. That load of re-learning paths can be a substantial burden on the control plane processor of most Ethernet switches. -> **SECONDARY:** You cannot mix IPv4 and IPv6 CIDRs in a single rule using `nets` or `notNets`. If you need to match both, create 2 rules. 
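The note above about not mixing IPv4 and IPv6 CIDRs amounts to partitioning a mixed list by address family before building the two rules. A minimal sketch with Python's standard `ipaddress` module (the helper name is hypothetical, not a Calico API):

```python
import ipaddress

def split_nets_by_family(nets):
    # Partition CIDR strings into IPv4 and IPv6 lists so that each list
    # can populate the `nets` field of its own rule.
    v4, v6 = [], []
    for net in nets:
        (v4 if ipaddress.ip_network(net).version == 4 else v6).append(net)
    return v4, v6

mixed = ["10.65.0.0/16", "fd00:10:65::/48", "192.168.0.0/24"]
v4_nets, v6_nets = split_nets_by_family(mixed)
print(v4_nets)  # ['10.65.0.0/16', '192.168.0.0/24']
print(v6_nets)  # ['fd00:10:65::/48']
```

Each resulting list then goes into the `nets` field of its own rule, satisfying the single-family constraint.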
+- High volumes of broadcast traffic -#### Selector performance in EntityRules[​](#selector-performance-in-entityrules) + As each node on the Ethernet network must use Broadcast packets to locate peers, and many use broadcast for other purposes, the resultant packet replication to each and every endpoint can lead to *broadcast storms* in large Ethernet networks, effectively consuming most, if not all resources in the network and the attached endpoints. -When rendering policy into the data plane, Calico Enterprise must identify the endpoints that match the selectors in all active rules. This calculation is optimized for certain common selector types. Using the optimized selector types reduces CPU usage (and policy rendering time) by orders of magnitude. This becomes important at high scale (hundreds of active rules, hundreds of thousands of endpoints). +- Spanning tree -The optimized operators are as follows: + Spanning tree is the protocol used to keep an Ethernet network from forming loops. The protocol was designed in the era of smaller, simpler networks, and it has not aged well. As the number of links and interconnects in an Ethernet network goes up, many implementations of spanning tree become more *fragile*. Unfortunately, when spanning tree fails in an Ethernet network, the effect is a catastrophic loop or partition (or both) in the network, and, in most cases, difficult to troubleshoot or resolve. -- `label == "value"` -- `label in { 'v1', 'v2' }` -- `has(label)` -- ` && ` is optimized if **either** `` or `` is optimized. +Although many of these issues are crippling at *VM scale* (tens of thousands of endpoints that live for hours, days, weeks), they will be absolutely lethal at *container scale* (hundreds of thousands of endpoints that live for seconds, minutes, days). -The following perform like `has(label)`. 
All endpoints with the label will be scanned to find matches:

+If you weren't ready to turn off your Ethernet data center network before this, I bet you are now. Before you do, however, let's look at how Calico Enterprise can mitigate these issues, even in very large deployments.

- `label contains 's'`
- `label starts with 's'`
- `label ends with 's'`

+## How does Calico Enterprise tame the Ethernet daemons?[​](#how-does-calico-enterprise-tame-the-ethernet-daemons)

The other operators, and in particular, `all()`, `!`, `||` and `!=` are not optimized.

+First, let's look at how Calico Enterprise uses an Ethernet interconnect fabric. It's important to remember that an Ethernet network *sees* nothing on the other side of an attached IP router; the Ethernet network just *sees* the router itself. This is why Ethernet switches can be used at Internet peering points, where large fractions of Internet traffic are exchanged. The switches only see the routers from the various ISPs, not those ISPs' customers' nodes. We leverage the same effect in Calico Enterprise.

Examples:

+To take the issues outlined above, let's revisit them in a Calico Enterprise context.

- `a == 'b'` - optimized
- `a == 'b' && has(c)` - optimized
- `a == 'b' || has(c)` - **not** optimized due to use of `||`
- `c != 'd'` - **not** optimized due to use of `!=`
- `!has(a)` - **not** optimized due to use of `!`
- `a == 'b' && c != 'd'` - optimized, `a =='b'` is optimized so `a == 'b' && ` is optimized.
- `c != 'd' && a == 'b'` - optimized, `a =='b'` is optimized so ` && a == 'b'` is optimized.

+- Large numbers of endpoints

### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names)

+  In a Calico Enterprise network, the Ethernet interconnect fabric only sees the routers/compute servers, not the endpoints. 
In a standard cloud model, where there are tens of VMs per server (or hundreds of containers), this reduces the number of nodes that the Ethernet sees (and has to learn) by one to two orders of magnitude. Even in very large pods (say twenty thousand servers), the Ethernet network would still only see a few tens of thousands of endpoints. Well within the scale of any competent data center Ethernet top of rack (ToR) switch.

The `domains` field is only valid for egress Allow rules. It restricts the rule to apply only to traffic to one of the specified domains. If this field is specified, the parent [Rule](#rule)'s `action` must be `Allow`, and `nets` and `selector` must both be left empty. +- High rate of churn

When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example: + In a classical Ethernet data center fabric, there is a *churn* event each time an endpoint is created, destroyed, or moved. In a large data center, with hundreds of thousands of endpoints, this *churn* could run into tens of events per second, every second of the day, with peaks easily in the hundreds or thousands of events per second. In a Calico Enterprise network, however, the *churn* is very low. The only event that would lead to *churn* in the fabric is the addition or removal of a compute server/router, because the fabric does not see the endpoint churn behind each router. Even at a churn rate orders of magnitude more than what is normally experienced, there would only be two thousand events per **day**. Any switch that cannot handle that volume of change in the network should not be used for any application.

- `microsoft.com` - `tigera.io` +- High volume of broadcast traffic

With a single asterisk in any part of the domain name, it matches 1 or more path components at that position. For example: + Because the first (and last) hop for any traffic in a Calico Enterprise network is an IP hop, and IP hops terminate broadcast traffic, there is no endpoint broadcast network in the Ethernet fabric, period. 
In fact, the only broadcast traffic that should be seen in the Ethernet fabric is the ARPs of the compute servers locating each other. If the traffic pattern is fairly consistent, the steady-state ARP rate should be almost zero. Even in a pathological case, the ARP rate should be well within normal accepted boundaries. -- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com` -- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com` -- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on +- Spanning tree -**Not** supported are: + Depending on the architecture chosen for the Ethernet fabric, it may even be possible to turn off spanning tree. However, even if it is left on, due to the reduction in node count, and reduction in churn, most competent spanning tree implementations should be able to handle the load without stress. -- Multiple wildcards in the same domain, for example: `*.*.mycompany.com` -- Asterisks that are not the entire component, for example: `www.g*.com` -- A wildcard as the last component, for example: `www.mycompany.*` -- More general wildcards, such as regular expressions +With these considerations in mind, it should be evident that an Ethernet connection fabric in Calico Enterprise is not only possible, it is practical and should be seriously considered as the interconnect fabric for a Calico Enterprise network. -> **SECONDARY:** Calico Enterprise implements policy for domain names by learning the corresponding IPs from DNS, then programming rules to allow those IPs. This means that if multiple domain names A, B and C all map to the same IP, and there is domain-based policy to allow A, traffic to B and C will be allowed as well. +As mentioned in the IP fabric post, an IP fabric is also quite feasible for Calico Enterprise, but there are more considerations that must be taken into account. 
The Ethernet fabric option has fewer architectural considerations in its design.

-### Selector[​](#selector)

+## A brief note about Ethernet topology[​](#a-brief-note-about-ethernet-topology)

-A label selector is an expression which either matches or does not match a resource based on its labels.

+As mentioned elsewhere in the Calico Enterprise documentation, because Calico Enterprise can use most of the standard IP tooling, some interesting options regarding fabric topology become possible.

-Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses.

+We assume that an Ethernet fabric for Calico Enterprise would most likely be constructed as a *leaf/spine* architecture. Other options are possible, but the *leaf/spine* is the predominant architectural model in use in scale-out infrastructure today.

-| Expression | Meaning |
-| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| **Logical operators** | |
-| `( <selector> )` | Matches if and only if `<selector>` matches. (Parentheses are used for grouping expressions.) |
-| `! <selector>` | Matches if and only if `<selector>` does not match. **Tip:** `!` is a special character at the start of a YAML string; if you need to use `!` at the start of a YAML string, enclose the string in quotes. |
-| `<selector 1> && <selector 2>` | "And": matches if and only if both `<selector 1>` and `<selector 2>` match |
-| `<selector 1> \|\| <selector 2>` | "Or": matches if and only if either `<selector 1>` or `<selector 2>` matches. |
-| **Match operators** | |
-| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. |
-| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets.
| -| `k == 'v'` | Matches resources with the label 'k' and value 'v'. | -| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` | -| `has(k)` | Matches resources with label 'k', independent of value. To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` | -| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set | -| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set | -| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' | -| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' | -| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' | +Because Calico Enterprise is an IP routed fabric, a Calico Enterprise network can use [ECMP](https://en.wikipedia.org/wiki/Equal-cost_multi-path_routing) to distribute traffic across multiple links (instead of using Ethernet techniques such as MLAG). By leveraging ECMP load balancing on the Calico Enterprise compute servers, it is possible to build the fabric out of multiple *independent* leaf/spine planes using no technologies other than IP routing in the Calico Enterprise nodes, and basic Ethernet switching in the interconnect fabric. These planes would operate completely independently and could be designed such that they would not share a fault domain. This would allow for the catastrophic failure of one (or more) plane(s) of Ethernet interconnect fabric without the loss of the pod (the failure would just decrease the amount of interconnect bandwidth in the pod). This is a gentler failure mode than the pod-wide IP or Ethernet failure that is possible with today's designs. 
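The multi-plane ECMP idea above can be sketched with a toy flow-hashing model (hypothetical plane names and addresses, not Calico Enterprise code): each compute server deterministically hashes a flow onto one of the planes that is currently up, so losing a plane only removes interconnect bandwidth, not connectivity.

```python
# Hypothetical sketch, not Calico Enterprise code: hash each flow onto one
# of the independent leaf/spine planes, re-hashing onto the survivors when
# a plane fails.
import zlib

def pick_plane(flow_id, live_planes):
    """Deterministically map a flow onto one of the currently live planes."""
    return live_planes[zlib.crc32(flow_id.encode()) % len(live_planes)]

planes = ["blue", "green", "orange", "red"]
flow = "10.65.0.2:49152->10.65.1.7:443"

chosen = pick_plane(flow, planes)
assert chosen in planes  # every flow lands on exactly one plane

# Catastrophic failure of one plane: traffic re-hashes onto the survivors,
# so only capacity is lost, not reachability.
survivors = [p for p in planes if p != chosen]
assert pick_plane(flow, survivors) in survivors
```

Real deployments would rely on the kernel's ECMP multipath routing rather than anything like this, but the failure behavior is the same: the nexthop set shrinks and flows redistribute.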
-Operators have the following precedence:

+You might find this [Facebook blog post](https://engineering.fb.com/2014/11/14/production-engineering/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/) on their fabric approach interesting. A graphic to visualize the idea is shown below.

-- **Highest**: all the match operators
-- Parentheses `( ... )`
-- Negation with `!`
-- Conjunction with `&&`
-- **Lowest**: Disjunction with `||`

+![Ethernet spine planes](https://docs.tigera.io/assets/images/l2-spine-planes-d1685acaabb4c4a56f5b79d9932f8796.png)

-For example, the expression

+The endpoints are not shown in this diagram; they would be unaware of anything in the fabric (as noted above).

-```text
-! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'}
-```

+In this diagram, each ToR is segmented into four logical switches (possibly by using 'port VLANs') ([note 2](#note-2)), and each compute server has a connection to each of those logical switches. We will identify those logical switches by their color. Each ToR would then have a blue, green, orange, and red logical switch. Those 'colors' would be members of a given *plane*, so there would be a blue plane, a green plane, an orange plane, and a red plane. Each plane would have a dedicated spine switch, and each logical switch in a given plane would be connected to its plane's spine, and only that spine.

-Would be "bracketed" like this:

+Each plane would constitute an IP network, so the blue plane would be 2001:db8:1000::/36, the green would be 2001:db8:2000::/36, and the orange and red planes would be 2001:db8:3000::/36 and 2001:db8:4000::/36 respectively ([note 3](#note-3)).

-```text
-(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'}))
-```

+Each IP network (plane) requires its own BGP route reflectors.
Those route reflectors need to be peered with each other within the plane, but route reflectors in different planes do not need to be peered with one another. Therefore, a fabric of four planes would have four route reflector meshes. Each compute server, border router, *etc.*, would need to be a route reflector client of at least one route reflector in each plane, and very preferably two or more in each plane.

-It would match:

+The following diagram visualizes the route reflector environment.

-- Any resource that did not have label "my-label".

+![route-reflector](https://docs.tigera.io/assets/images/l2-rr-spine-planes-d10ad67fe16f2c08329e0baf80f213fc.png)

-- Any resource that both:

+These route reflectors could be dedicated hardware connected to the spine switches (or the spine switches themselves), or physical or virtual route reflectors connected to the necessary logical leaf switches (blue, green, orange, and red). The latter could be a route reflector running on a compute server and connected directly to the correct plane link, and not routed through the vRouter, to avoid the chicken and egg problem that would occur if the route reflector were "behind" the Calico Enterprise network.

- 

+Other physical and logical configurations and counts are, of course, possible; this is just an example.

- - Has a value for `my-label` that starts with "prod", and, - - Has a role label with value either "frontend", or "business".

+In the logical configuration, each compute server would then have an address on each plane's subnet, and would announce its endpoints on each subnet. If ECMP is then turned on, the compute servers would distribute the load across all planes.

-Understanding scopes and the `all()` and `global()` operators: selectors have a scope of resources that they are matched against, which depends on the context in which they are used. For example:

+If a plane were to fail (say due to a spanning tree failure), then only that one plane would fail.
The remaining planes would stay running. -- The `nodeSelector` in an `IPPool` selects over `Node` resources. +### Footnotes[​](#footnotes) -- The top-level selector in a `NetworkPolicy` selects over the workloads *in the same namespace* as the `NetworkPolicy`. +### Note 1[​](#note-1) -- The top-level selector in a `GlobalNetworkPolicy` doesn't have the same restriction, it selects over all endpoints including namespaced `WorkloadEndpoint`s and non-namespaced `HostEndpoint`s. +In this document (and in all Calico Enterprise documents) we tend to use the term *endpoint* to refer to a virtual machine, container, appliance, bare metal server, or any other entity that is connected to a Calico Enterprise network. If we are referring to a specific type of endpoint, we will call that out (such as referring to the behavior of VMs as distinct from containers). -- The `namespaceSelector` in a `NetworkPolicy` (or `GlobalNetworkPolicy`) *rule* selects over the labels on namespaces rather than workloads. +### Note 2[​](#note-2) -- The `namespaceSelector` determines the scope of the accompanying `selector` in the entity rule. If no `namespaceSelector` is present then the rule's `selector` matches the default scope for that type of policy. (This is the same namespace for `NetworkPolicy` and all endpoints/network sets for `GlobalNetworkPolicy`) +We are using logical switches in this example. Physical ToRs could also be used, or a mix of the two (say 2 logical switches hosted on each physical switch). -- The `global()` operator can be used (only) in a `namespaceSelector` to change the scope of the main `selector` to include non-namespaced resources such as [GlobalNetworkSet](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset). This allows namespaced `NetworkPolicy` resources to refer to global non-namespaced resources, which would otherwise be impossible. +### Note 3[​](#note-3) -### Ports[​](#ports) +We use IPv6 here purely as an example. 
IPv4 would be configured similarly. -Calico Enterprise supports the following syntaxes for expressing ports. +### Calico over IP fabrics -| Syntax | Example | Description | -| --------- | ---------- | ------------------------------------------------------------------- | -| int | 80 | The exact (numeric) port specified | -| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end | -| string | named-port | A named port, as defined in the ports list of one or more endpoints | +Calico Enterprise provides an end-to-end IP network that interconnects the endpoints ([note 1](#note-1)) in a scale-out or cloud environment. To do that, it needs an *interconnect fabric* to provide the physical networking layer on which Calico Enterprise operates ([note 2](#note-2)). -An individual numeric port may be specified as a YAML/JSON integer. A port range or named port must be represented as a string. For example, this would be a valid list of ports: +Although Calico Enterprise is designed to work with any underlying interconnect fabric that can support IP traffic, the fabric that has the least considerations attached to its implementation is an Ethernet fabric as discussed in [Calico over Ethernet fabrics](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric). -```yaml -ports: [8080, '1234:5678', 'named-port'] -``` +In most cases, the Ethernet fabric is the appropriate choice, but there are infrastructures where L3 (an IP fabric) has already been deployed, or will be deployed, and it makes sense for Calico Enterprise to operate in those environments. -#### Named ports[​](#named-ports) +However, because Calico Enterprise is, itself, a routed infrastructure, there are more engineering, architecture, and operations considerations that have to be weighed when running Calico Enterprise with an IP routed interconnection fabric. We will briefly outline those in the rest of this post. 
That said, Calico Enterprise operates equally well with Ethernet or IP interconnect fabrics. -Using a named port in an `EntityRule`, instead of a numeric port, gives a layer of indirection, allowing for the named port to map to different numeric values for each endpoint. +## Background[​](#background) -For example, suppose you have multiple HTTP servers running as workloads; some exposing their HTTP port on port 80 and others on port 8080. In each workload, you could create a named port called `http-port` that maps to the correct local port. Then, in a rule, you could refer to the name `http-port` instead of writing a different rule for each type of server. +### Basic Calico Enterprise architecture overview[​](#basic-calico-enterprise-architecture-overview) -> **SECONDARY:** Since each named port may refer to many endpoints (and Calico Enterprise has to expand a named port into a set of endpoint/port combinations), using a named port is considerably more expensive in terms of CPU than using a simple numeric port. We recommend that they are used sparingly, only where the extra indirection is required. +A description of the Calico Enterprise architecture can be found in our [architectural overview](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/overview). However, a brief discussion of the routing and data paths is useful for the discussion. -### ServiceAccountMatch[​](#serviceaccountmatch) +In a Calico Enterprise network, each compute server acts as a router for all of the endpoints that are hosted on that compute server. We call that function a vRouter. The data path is provided by the Linux kernel, the control plane by a BGP protocol server, and management plane by Calico Enterprise's on-server agent, *Felix*. -A ServiceAccountMatch matches service accounts in an EntityRule. +Each endpoint can only communicate through its local vRouter, and the first and last *hop* in any Calico Enterprise packet flow is an IP router hop through a vRouter. 
Each vRouter announces all of the endpoints it is attached to, to all the other vRouters and other routers on the infrastructure fabric, using BGP, usually with BGP route reflectors to increase scale. A discussion of why we use BGP can be found in [Why BGP?](https://www.tigera.io/blog/why-bgp/).

-| Field | Description | Schema |
-| -------- | ------------------------------- | --------------------- |
-| names | Match service accounts by name | list of strings |
-| selector | Match service accounts by label | [selector](#selector) |

+Access control lists (ACLs) enforce security (and other) policy as directed by whatever cloud orchestrator is in use. There are other components in the Calico Enterprise architecture, but they are irrelevant to the interconnect network fabric discussion.

-### ServiceMatch[​](#servicematch)

+### Overview of current common IP scale-out fabric architectures[​](#overview-of-current-common-ip-scale-out-fabric-architectures)

-A ServiceMatch matches a service in an EntityRule.

+There are two approaches to building an IP fabric for a scale-out infrastructure. However, both, to date, have assumed that the edge router in the infrastructure is the top of rack (TOR) switch. In the Calico Enterprise model, that function is pushed to the compute server itself.

-| Field | Description | Schema |
-| --------- | ------------------------ | ------ |
-| name | The service's name. | string |
-| namespace | The service's namespace. | string |

+The two approaches are:

-### Performance Hints[​](#performance-hints)

+**Routing infrastructure is based on some form of IGP**

-Performance hints provide a way to tell Calico Enterprise about the intended use of the policy so that it may process it more efficiently. Currently only one hint is defined:

+Due to the limitations in scale of IGP networks, the Calico Enterprise team does not believe that using an IGP to distribute endpoint reachability information will adequately scale in a Calico Enterprise environment.
However, it is possible to use a combination of IGP and BGP in the interconnect fabric, where an IGP communicates the path to the *next-hop* router (in Calico Enterprise, this is often the destination compute server) and BGP is used to distribute the actual next-hop for a given endpoint. This is a valid model, and, in fact, is the most common approach in a widely distributed IP network (say a carrier's backbone network). The design of these networks is somewhat complex though, and will not be addressed further in this article ([note 3](#note-3)).

-- `AssumeNeededOnEveryNode`: normally, Calico Enterprise only calculates a policy's rules and selectors on nodes where the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases. The `AssumeNeededOnEveryNode` hint tells Calico Enterprise to treat the policy as "in use" on *every* node. This is useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads" the policy on every node so that there is less work to do when the first endpoint matching the policy shows up. It also prevents work from being done to tear down the policy when the last endpoint is drained.

+**Routing infrastructure is based entirely on BGP**

-## Application layer policy[​](#application-layer-policy)

+In this model, the IP network is "tight enough" or has a small enough diameter that BGP can be used to distribute endpoint routes, and the paths to the next-hops for those routes are known to all of the routers in the network (in a Calico Enterprise network this includes the compute servers). This is the network model that this note will address.

-Application layer policy is an optional feature of Calico Enterprise and [must be enabled](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp) to use the following match criteria.
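The BGP-only model can be sketched with a toy route table (hypothetical server and prefix names, not a real BGP implementation): after convergence, every router in the fabric, compute servers included, holds a route for every endpoint, with the compute server (vRouter) hosting that endpoint as the next hop.

```python
# Toy sketch, not a real BGP implementation: endpoint prefixes mapped to
# the compute server (vRouter) that hosts each one. After convergence,
# every router in the fabric has learned every endpoint route.
endpoints = {
    "10.65.0.2/32": "server-a",  # prefix -> hosting compute server
    "10.65.0.3/32": "server-a",
    "10.65.1.7/32": "server-b",
}
routers = ["server-a", "server-b", "tor-1", "spine-1"]

# Identical converged tables everywhere: this is the "small diameter"
# property that makes a BGP-only fabric practical.
tables = {router: dict(endpoints) for router in routers}

assert tables["spine-1"]["10.65.1.7/32"] == "server-b"
assert all(len(table) == len(endpoints) for table in tables.values())
```

The point of the sketch is the scaling consequence discussed below: the number of routes every device must hold grows with the number of endpoints, not the number of servers.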
+In this article, we will cover the second option because it is more common in the scale-out world.

-> **SECONDARY:** Application layer policy match criteria are supported with the following restrictions.
->
-> - Only ingress policy is supported. Egress policy must not contain any application layer policy match clauses.
-> - Rules must have the action `Allow` if they contain application layer policy match clauses.

+### BGP-only interconnect fabrics[​](#bgp-only-interconnect-fabrics)

-### HTTPMatch[​](#httpmatch)

+There are multiple methods to build a BGP-only interconnect fabric. We will focus on three models, each with two widely viable variations. There are other options, and we will briefly touch on why we didn't include some of them in [Other Options](#other-options).

-An HTTPMatch matches attributes of an HTTP request. The presence of an HTTPMatch clause on a Rule will cause that rule to only match HTTP traffic. Other application layer protocols will not match the rule.

+The first two models are:

-Example:

+- A BGP fabric where each of the TOR switches (and their subsidiary compute servers) is a unique [Autonomous System (AS)](https://en.wikipedia.org/wiki/Autonomous_System_\(Internet\)) and they are interconnected via either an Ethernet switching plane provided by the spine switches in a [leaf/spine](http://bradhedlund.com/2012/10/24/video-a-basic-introduction-to-the-leafspine-data-center-networking-fabric-design/) architecture, or via a set of spine switches, each of which is also a unique AS. We'll refer to this as the *AS per rack* model. This model is detailed in [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938).

-```yaml
-http:

+- A BGP fabric where each of the compute servers is a unique AS, and the TOR switches make up a transit AS. We'll refer to this as the *AS per server* model.

- methods: ['GET', 'PUT']

+Each of these models can have either an Ethernet or an IP spine.
In the case of an Ethernet spine, each spine switch provides an isolated Ethernet connection *plane*, as in the Calico Enterprise Ethernet interconnect fabric model, and each TOR switch is connected to each spine switch.

- paths:

+In the case of an IP spine, each spine switch is a unique AS, and each TOR switch BGP peers with each spine switch. In both cases, the TOR switches use ECMP to load-balance traffic between all available spine switches.

- - exact: '/projects/calico'

+### BGP network design considerations[​](#bgp-network-design-considerations)

- - prefix: '/users'
-```

+Contrary to popular opinion, BGP is actually a fairly simple protocol. For example, the BGP configuration on a Calico Enterprise compute server is approximately sixty lines long, not counting comments. The perceived complexity is due to the things that you can *do* with BGP. Many uses of BGP involve complex policy rules, where the behavior of BGP can be modified to meet technical (or business, financial, political, etc.) requirements. A default Calico Enterprise network does not venture into those areas ([note 4](#note-4)), and therefore is fairly straightforward.

-| Field | Description | Schema |
-| ------- | -------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
-| methods | Match HTTP methods. Case sensitive. [Standard HTTP method descriptions.](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html) | list of strings |
-| paths | Match HTTP paths. Case sensitive. | list of [HTTPPathMatch](#httppathmatch) |

+That said, there are a few design rules for BGP that need to be kept in mind when designing an IP fabric that will interconnect nodes in a Calico Enterprise network.
These BGP design requirements *can* be worked around, if necessary, but doing so takes the designer out of the standard BGP *envelope* and should only be done by an implementer who is *very* comfortable with advanced BGP design.

+These considerations are:

-### HTTPPathMatch[​](#httppathmatch)

+- AS continuity or *AS puddling*

-| Syntax | Example | Description |
-| ------ | ------------------- | ------------------------------------------------------------------------------- |
-| exact | `exact: "/foo/bar"` | Matches the exact path as written, not including the query string or fragments. |
-| prefix | `prefix: "/keys"` | Matches any path that begins with the given prefix. |

+ Any router in an AS *must* be able to communicate with any other router in that same AS without transiting another AS.

-## Supported operations[​](#supported-operations)

+- Next hop behavior

-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| ------------------------ | ------------- | ------ | -------- | ----- |
-| Kubernetes API datastore | Yes | Yes | Yes | |

+ By default BGP routers do not change the *next hop* of a route if it is peering with another router in its same AS. The inverse is also true: a BGP router will set itself as the *next hop* of a route if it is peering with a router in another AS.

-#### List filtering on tiers[​](#list-filtering-on-tiers)

+- Route reflection

-List and watch operations may specify label selectors or field selectors to filter `NetworkPolicy` resources on tiers returned by the API server. When no selector is specified, the API server returns all `NetworkPolicy` resources from all tiers that the user has access to.

+ All BGP routers in a given AS must *peer* with all the other routers in that AS. This is referred to as a *complete BGP mesh*. This can become problematic as the number of routers in the AS scales up. The use of *route reflectors* reduces the need for the complete BGP mesh.
However, route reflectors also have scaling considerations.

-##### Field selector[​](#field-selector)

+- Endpoints

-When using the field selector, supported operators are `=` and `==`

+ In a Calico Enterprise network, each endpoint is a route. Hardware networking platforms are constrained by the number of routes they can learn. This is usually in the range of 10,000's or 100,000's of routes. Route aggregation can help, but that is usually dependent on the capabilities of the scheduler used by the orchestration software (*e.g.* OpenStack).

-The following example shows how to retrieve all `NetworkPolicy` resources in the default tier and in all namespaces:

+A deeper discussion of these considerations can be found in the [IP Fabric Design Considerations](#ip-fabric-design-considerations).

-```bash
-kubectl get networkpolicy.p --field-selector spec.tier=default --all-namespaces
-```

+The designs discussed below address these considerations.

-##### Label selector[​](#label-selector)

+### The AS Per Rack model[​](#the-as-per-rack-model)

-When using the label selector, supported operators are `=`, `==` and `IN`.

+This model is the closest to the model suggested by [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938).

-The following example shows how to retrieve all `NetworkPolicy` resources in the `default` and `net-sec` tiers and in all namespaces:

+As mentioned earlier, there are two versions of this model, one with a set of Ethernet planes interconnecting the ToR switches, and the other where the core planes are also routers. The following diagrams may be useful for the discussion.
-### Network set +![Diagram showing the AS per rack model with ToR switches meshed via Ethernet switching planes at the spine layer](https://docs.tigera.io/assets/images/l3-fabric-diagrams-as-rack-l2-spine-586a942656c4718cae0d17e78de81a15.png) - +The diagram above shows the **AS per rack model** where the ToR switches are physically meshed via a set of Ethernet switching planes. -A network set resource (NetworkSet) represents an arbitrary set of IP subnetworks/CIDRs, allowing it to be matched by Calico Enterprise policy. Network sets are useful for applying policy to traffic coming from (or going to) external, non-Calico Enterprise, networks. +![Diagram showing the AS per rack model with ToR switches meshed via discrete BGP spine routers, each in their own AS](https://docs.tigera.io/assets/images/l3-fabric-diagrams-as-rack-l3-spine-731d38ec8419d6e7a50a6ee9a610bdf1.png) -`NetworkSet` is a namespaced resource. `NetworkSets` in a specific namespace only applies to [network policies](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) in that namespace. Two resources are in the same namespace if the `namespace` value is set the same on both. (See [GlobalNetworkSet](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset) for non-namespaced network sets.) +The diagram above shows the **AS per rack model** where the ToR switches are physically meshed via a set of discrete BGP spine routers, each in their own AS. -The metadata for each network set includes a set of labels. When Calico Enterprise is calculating the set of IPs that should match a source/destination selector within a [network policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) rule, it includes the CIDRs from any network sets that match the selector. 
+In this approach, every ToR-ToR or ToR-Spine (in the case of an AS per spine) link is an eBGP peering, which means that there is no route-reflection possible (using standard BGP route reflectors) *north* of the ToR switches.

-> **SECONDARY:** Since Calico Enterprise matches packets based on their source/destination IP addresses, Calico Enterprise rules may not behave as expected if there is NAT between the Calico Enterprise-enabled node and the networks listed in a network set. For example, in Kubernetes, incoming traffic via a service IP is typically SNATed by the kube-proxy before reaching the destination host so Calico Enterprise's workload policy will see the kube-proxy's host's IP as the source instead of the real source.

+If the L2 spine option is used, the result of this is that each ToR must peer with every other ToR switch in the cluster (which could be hundreds of peers).

-## Sample YAML[​](#sample-yaml)

+If the AS per spine option is used, then each ToR only has to peer with each spine (there are usually somewhere between two and sixteen spine switches in a pod). However, the spine switches must peer with all ToR switches (again, that would be hundreds, but most spine switches have more control plane capacity than the average ToR, so this might be more scalable in many circumstances).

-```yaml
-apiVersion: projectcalico.org/v3

+Within the rack, the configuration is the same for both variants, and is somewhat different than the configuration north of the ToR.

-kind: NetworkSet

+Every router within the rack, which, in the case of Calico Enterprise, is every compute server, shares the same AS as the ToR that it is connected to. That connection is in the form of an Ethernet switching layer. Each router in the rack must be directly connected to enable the AS to remain contiguous. The ToR's *router* function is then connected to that Ethernet switching layer as well.
The actual configuration of this is dependent on the ToR in use, but usually it means that the ports that are connected to the compute servers are treated as *subnet* or *segment* ports, and then the ToR's *router* function has a single interface into that subnet.

-metadata:

+This configuration allows each compute server to connect to each other compute server in the rack without going through the ToR router, but it will, of course, go through the ToR switching function. The compute servers and the ToR router could all be directly meshed, or a route reflector could be used within the rack, either hosted on the ToR itself, or as a virtual function hosted on one or more compute servers within the rack.

- name: external-database

+The ToR, as the eBGP router, redistributes all of the routes from other ToRs, as well as routes external to the data center, to the compute servers that are in its AS, and announces all of the routes from within the AS (rack) to the other ToRs and the larger world. This means that each compute server will see the ToR as the next hop for all external routes, and the individual compute servers are the next hop for all routes internal to the rack.

- namespace: staging

+### The AS per Compute Server model[​](#the-as-per-compute-server-model)

- labels:

+This model takes the concept of an AS per rack to its logical conclusion. In the earlier referenced [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938), the assumption in the overall model is that the ToR is the first tier aggregating and routing element. In Calico Enterprise, the ToR, if it is an L3 router, is actually the second tier. Remember, in Calico Enterprise, the compute server is always the first/last router for an endpoint, and is also the first/last point of aggregation.

- role: db

+Therefore, if we follow the architecture of the draft, the compute server, not the ToR, should be the AS boundary. The differences can be seen in the following two diagrams.
-spec: +![Diagram showing the AS per compute server model with ToR switches meshed via Ethernet switching planes at the spine layer](https://docs.tigera.io/assets/images/l3-fabric-diagrams-as-server-l2-spine-ef320fdea22b2f69da6211d3731a3c32.png) - nets: +The diagram above shows the *AS per compute server model* where the ToR switches are physically meshed via a set of Ethernet switching planes. - - 198.51.100.0/28 +![Diagram showing the AS per compute server model with ToR switches connected to independent routing planes at the spine layer](https://docs.tigera.io/assets/images/l3-fabric-diagrams-as-server-l3-spine-0515c7852f8f7aaf4d550012ff10b5fe.png) - - 203.0.113.0/24 +The diagram above shows the *AS per compute server model* where the ToR switches are physically connected to a set of independent routing planes. - allowedEgressDomains: +As can be seen in these diagrams, there are still the same two variants as in the *AS per rack* model, one where the spine switches provide a set of independent Ethernet planes to interconnect the ToR switches, and the other where that is done by a set of independent routers. - - db.com +The real difference in this model is that the compute servers as well as the ToR switches are all independent autonomous systems. Making this work at scale requires the use of four byte AS numbers, as discussed in [RFC 4893](http://www.faqs.org/rfcs/rfc4893.html). Without four byte AS numbering, the total number of ToRs and compute servers in a Calico Enterprise fabric would be limited to the approximately five thousand available private AS ([note 5](#note-5)) numbers. If four byte AS numbers are used, there are approximately ninety-two million private AS numbers available. This should be sufficient for any given Calico Enterprise fabric. - - '*.db.com' -``` +The other difference in this model *vs.* the AS per rack model is that there are no route reflectors used, as all BGP peerings are eBGP. 
In this case, each compute server in a given rack peers with its ToR switch, which is also acting as an eBGP router. For two servers within the same rack to communicate, they will be routed through the ToR. Therefore, each server will have one peering to each ToR it is connected to, and each ToR will have a peering with each compute server that it is connected to (normally, all the compute servers in the rack). -## Network set definition[​](#network-set-definition) +The inter-ToR connectivity considerations are the same in scale and scope as in the AS per rack model. -### Metadata[​](#metadata) +### The Downward Default model[​](#the-downward-default-model) -| Field | Description | Accepted Values | Schema | Default | -| --------- | ------------------------------------------------------------------ | ------------------------------------------------- | ------ | --------- | -| name | The name of this network set. Required. | Lower-case alphanumeric with optional `_` or `-`. | string | | -| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" | -| labels | A set of labels to apply to this endpoint. | | map | | +The final model is a bit different. Whereas, in the previous models, all of the routers in the infrastructure carry full routing tables and leave their AS paths intact, this model ([note 6](#note-6)) removes the AS numbers at each stage of the routing path. This prevents routes from other nodes in the network from being rejected as coming from the *local* AS (since the source and destination of the route would otherwise share the same AS). -### Spec[​](#spec) +The following diagram shows the AS relationships in this model. 
-| Field | Description | Accepted Values | Schema | Default | -| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------- | ------ | ------- | -| nets | The IP networks/CIDRs to include in the set. | Valid IPv4 or IPv6 CIDRs, for example "192.0.2.128/25" | list | | -| allowedEgressDomains | The list of domain names that belong to this set and are honored in egress allow rules only. Domain names specified here only work to allow egress traffic from the cluster to external destinations. They don't work to *deny* traffic to destinations specified by domain name, or to allow ingress traffic from *sources* specified by domain name. | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list | | +![Diagram showing the downward default model where all Calico nodes share one AS and all ToR switches share another, with spine routers announcing default routes downward](https://docs.tigera.io/assets/images/l3-fabric-downward-default-30bce2fe705b14f16d7381cf5612a81c.png) -### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names) +In the diagram above, we are showing that all Calico Enterprise nodes share the same AS number, as do all ToR switches. However, those ASs are different (*A1* is not the same network as *A2*, even though they both share the same AS number *A*). -When a configured domain name has no wildcard (`*`), it matches exactly that domain name. 
For example: +Although the use of a single AS for all ToR switches, and another for all compute servers, simplifies deployment (standardized configuration), the real benefit comes in the offloading of the routing tables in the ToR switches. - `microsoft.com` - `tigera.io` +In this model, each router announces all of its routes to its upstream peer (the Calico Enterprise routers to their ToR, the ToRs to the spine switches). However, in return, the upstream router only announces a default route. In this case, a given Calico Enterprise router only has routes for the endpoints that are locally hosted on it, as well as the default from the ToR. Because the ToR is the only route from the Calico Enterprise network to the rest of the network, this matches reality. The same happens between the ToR switches and the spine. This means that the ToR only has to install the routes that are for endpoints that are hosted on its downstream Calico Enterprise nodes. Even if we were to host 200 endpoints per Calico Enterprise node, and stuff 80 Calico Enterprise nodes in each rack, that would still limit the routing table on the ToR to a maximum of 16,000 entries (well within the capabilities of even the most modest of switches). -With a single asterisk in any part of the domain name, it matches 1 or more path components at that position. For example: +Because the default is originated by the spine, there is no chance for a downward-announced route to originate from the recipient's AS, preventing the **AS puddling** problem. 
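The ToR routing-table sizing claim above is simple arithmetic; here is a quick sketch using the illustrative per-node and per-rack figures from the text (they are examples, not limits):

```python
# Downward Default model: a ToR only installs routes for endpoints
# hosted on its own rack's Calico Enterprise nodes, so its table size
# is bounded by endpoints-per-node times nodes-per-rack.
endpoints_per_node = 200  # illustrative figure from the text
nodes_per_rack = 80       # illustrative figure from the text

tor_routes = endpoints_per_node * nodes_per_rack
print(tor_routes)  # 16000 entries, well within a modest ToR's capacity
```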
-- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com` -- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com` -- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on +There is one (minor) drawback to this model, in that all traffic that is destined for an invalid destination (the destination IP does not exist) will be forwarded to the spine switches before they are dropped. -**Not** supported are: +It should also be noted that the spine switches do need to carry all of the Calico Enterprise network routes, just as they do in the routed spines in the previous examples. In short, this model imposes no more load on the spines than they already would have, and substantially reduces the amount of routing table space used on the ToR switches. It also reduces the number of routes in the Calico Enterprise nodes, but, as we have discussed before, that is not a concern in most deployments as the amount of memory consumed by a full routing table in Calico Enterprise is a fraction of the total memory available on a modern compute server. -- Multiple wildcards in the same domain, for example: `*.*.mycompany.com` -- Asterisks that are not the entire component, for example: `www.g*.com` -- A wildcard as the last component, for example: `www.mycompany.*` -- More general wildcards, such as regular expressions +## Recommendation[​](#recommendation) -### Node +The Calico Enterprise team recommends the use of the [AS per rack](#the-as-per-rack-model) model if the resultant routing table size can be accommodated by the ToR and spine switches, remembering to account for projected growth. -A node resource (`Node`) represents a node running Calico Enterprise. When adding a host to a Calico Enterprise cluster, a node resource needs to be created which contains the configuration for the `node` instance running on the host. 
+If there is concern about the route table size in the ToR switches, the Calico Enterprise team recommends the [Downward Default](#the-downward-default-model) model. -When starting a `node` instance, the name supplied to the instance should match the name configured in the Node resource. +If there are concerns about both the spine and ToR switch route table capacity, or there is a desire to run a very simple L2 fabric to connect the Calico Enterprise nodes, then the user should consider the Ethernet fabric as detailed in [Calico over Ethernet fabrics](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric). -By default, starting a `node` instance will automatically create a node resource using the `hostname` of the compute host. +If you are interested in the AS per compute server model, the Calico Enterprise team would be very interested in discussing its deployment. -This resource is not supported in `kubectl`. +## Other options[​](#other-options) -## Sample YAML[​](#sample-yaml) +Because of the way the physical and logical connectivity is laid out in this article and in the [Ethernet fabric](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric), the next hop router for a given route is always directly connected to the router receiving that route. This makes another protocol to distribute the next hop routes unnecessary. -```yaml -apiVersion: projectcalico.org/v3 +However, in many (or most) WAN BGP networks, the routers within a given AS may not be directly adjacent. Therefore, a router may receive a route with a next hop address that it is not directly adjacent to. In those cases, an IGP, such as OSPF or IS-IS, is used by the routers within a given AS to determine the path to the BGP next hop route. -kind: Node +There may be Calico Enterprise architectures with similar models, where the routers within a given AS are not directly adjacent. 
In those models, the use of an IGP in Calico Enterprise may be warranted. The configuration of those protocols is, however, beyond the scope of this technical note. -metadata: +### IP fabric design considerations[​](#ip-fabric-design-considerations) - name: node-hostname +**AS puddling** -spec: +The first consideration is that an AS must be kept contiguous. This means that any two nodes in a given AS must be able to communicate without traversing any other AS. If this rule is not observed, the effect is often referred to as *AS puddling* and the network will *not* function correctly. - bgp: +A corollary of that rule is that any two administrative regions that share the same AS number are in the same AS, even if that was not the desire of the designer. BGP has no way of identifying if an AS is local or foreign other than the AS number. Therefore, re-use of an AS number for two *networks* that are not directly connected, but only connected through another *network* or AS number, will not work without a lot of policy changes to the BGP routers. - asNumber: 64512 +Another corollary of that rule is that a BGP router will not propagate a route to a peer if the route has an AS in its path that is the same AS as the peer. This prevents loops from forming in the network. The effect of this is to prevent two routers in the same AS from transiting another router (either in that AS or not). - ipv4Address: 10.244.0.1/24 +**Next hop behavior** - ipv6Address: 2001:db8:85a3::8a2e:370:7334/120 +Another consideration is based on the differences between iBGP and eBGP. BGP operates in two modes: if two routers are BGP peers but share the same AS number, then they are considered to be in an *internal* BGP (or iBGP) peering relationship. If they are members of different ASs, then they are in an *external* or eBGP relationship. 
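The iBGP/eBGP distinction described above reduces to a single comparison of AS numbers; a minimal sketch (the function name is ours for illustration, not part of any Calico Enterprise API):

```python
def peering_type(local_as: int, peer_as: int) -> str:
    """Classify a BGP peering relationship by AS number.

    Routers sharing an AS number are internal (iBGP) peers; routers
    in different ASs are external (eBGP) peers.
    """
    return "iBGP" if local_as == peer_as else "eBGP"

print(peering_type(64512, 64512))  # iBGP: next hop left unchanged
print(peering_type(64512, 64513))  # eBGP: peer sets next hop self
```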
- ipv4IPIPTunnelAddr: 192.168.0.1 -``` +BGP's original design model was that all BGP routers within a given AS would know how to get to one another (via static routes, IGP ([note 7](#note-7)) routing protocols, or the like), and that routers in different ASs would not know how to reach one another unless they were directly connected. -## Definition[​](#definition) +Based on that design point, routers in an iBGP peering relationship assume that they do not transit traffic for other iBGP routers in a given AS (i.e. A can communicate with C, and therefore will not need to route through B), and therefore, do not change the *next hop* attribute in BGP ([note 8](#note-8)). -### Metadata[​](#metadata) +A router with an eBGP peering, on the other hand, assumes that its eBGP peer will not know how to reach the next hop route, and then will substitute its own address in the next hop field. This is often referred to as *next hop self*. -| Field | Description | Accepted Values | Schema | -| ----- | -------------------------------- | --------------------------------------------------- | ------ | -| name | The name of this node. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | +In the Calico Enterprise [Ethernet fabric](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric) model, all of the compute servers (the routers in a Calico Enterprise network) are directly connected over one or more Ethernet network(s) and therefore are directly reachable. In this case, a router in the Calico Enterprise network does not need to set *next hop self* within the Calico Enterprise fabric. -### Spec[​](#spec) +The models we present in this article ensure that all routes that may traverse a non-Calico Enterprise router are eBGP routes, and therefore *next hop self* is automatically set correctly. 
If a deployment of Calico Enterprise in an IP interconnect fabric does not satisfy that constraint, then *next hop self* must be appropriately configured. -| Field | Description | Accepted Values | Schema | Default | -| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | ---------------------------- | ------- | -| bgp | BGP configuration for this node. Omit if using Calico Enterprise for policy only. | | [BGP](#bgp) | | -| ipv4VXLANTunnelAddr | IPv4 address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string | | -| ipv6VXLANTunnelAddr | IPv6 address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string | | -| vxlanTunnelMACAddr | MAC address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string | | -| vxlanTunnelMACAddrV6 | MAC address of the IPv6 VXLAN tunnel. This is system configured and should not be updated manually. | | string | | -| orchRefs | Correlates this node to a node in another orchestrator. | | list of [OrchRefs](#orchref) | | -| wireguard | WireGuard configuration for this node. This is applicable only if WireGuard is enabled in [Felix Configuration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/felixconfig). | | [WireGuard](#wireguard) | | +**Route reflection** -### OrchRef[​](#orchref) +As mentioned above, BGP expects that all of the iBGP routers in a network can see (and speak) directly to one another, this is referred to as a *BGP full mesh*. In small networks this is not a problem, but it does become interesting as the number of routers increases. For example, if you have 99 BGP routers in an AS and wish to add one more, you would have to configure the peering to that new router on each of the 99 existing routers. 
Not only is this a problem at configuration time, it means that each router is maintaining 100 protocol adjacencies, which can start being a drain on constrained resources in a router. While this might be *interesting* at 100 routers, it becomes an impossible task with 1000's or 10,000's of routers (the potential size of a Calico Enterprise network). -| Field | Description | Accepted Values | Schema | Default | -| ------------ | ------------------------------------------------ | --------------- | ------ | ------- | -| nodeName | Name of this node according to the orchestrator. | | string | | -| orchestrator | Name of the orchestrator. | k8s | string | | +Conveniently, large scale/Internet scale networks solved this problem almost 20 years ago by deploying BGP route reflection as described in [RFC 1966](http://www.faqs.org/rfcs/rfc1966.html). This is a technique supported by almost all BGP routers today. In a large network, a number of route reflectors ([note 9](#note-9)) are evenly distributed and each iBGP router is *peered* with one or more route reflectors (usually 2 or 3). Each route reflector can handle 10's or 100's of route reflector clients (in Calico Enterprise's case, the compute server), depending on the route reflector being used. Those route reflectors are, in turn, peered with each other. This means that there is an order of magnitude fewer route reflectors that need to be completely meshed, and each route reflector client is only configured to peer to 2 or 3 route reflectors. This is much easier to manage. -### BGP[​](#bgp) +Other route reflector architectures are possible, but those are beyond the scope of this document. 
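In a Calico Enterprise cluster, in-cluster route reflection along these lines can be sketched with the `Node` resource's `routeReflectorClusterID` field and label-based `BGPPeer` selectors. The node name, label, and addresses below are illustrative assumptions, not prescriptive values:

```yaml
# Mark one node as a route reflector (repeat for two or three nodes).
apiVersion: projectcalico.org/v3
kind: Node
metadata:
  name: rr-node-01
  labels:
    route-reflector: 'true'
spec:
  bgp:
    ipv4Address: 10.0.2.5/24
    routeReflectorClusterID: 224.0.0.1
---
# Peer every non-reflector node with the route reflectors only,
# instead of maintaining a full iBGP mesh.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-route-reflectors
spec:
  nodeSelector: '!has(route-reflector)'
  peerSelector: has(route-reflector)
```

As described above, a typical deployment would peer each client with two or three reflectors for redundancy, and peer the reflectors with each other.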
-| Field | Description | Accepted Values | Schema | Default | -| ----------------------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------- | -| asNumber | The AS Number of your `node`. | Optional. If omitted the global value is used (see [example modifying Global BGP settings](https://docs.tigera.io/calico-enterprise/latest/networking/configuring/bgp) for details about modifying the `asNumber` setting). | integer | | -| ipv4Address | The IPv4 address and subnet exported as the next-hop for the Calico Enterprise endpoints on the host | The IPv4 address must be specified if BGP is enabled. | string | | -| ipv6Address | The IPv6 address and subnet exported as the next-hop for the Calico Enterprise endpoints on the host | Optional | string | | -| ipv4IPIPTunnelAddr | IPv4 address of the IP-in-IP tunnel. This is system configured and should not be updated manually. | Optional IPv4 address | string | | -| routeReflectorClusterID | Enables this node as a route reflector within the given cluster | Optional IPv4 address | string | | +**Endpoints** -### WireGuard[​](#wireguard) +The final consideration is the number of endpoints in a Calico Enterprise network. In the [Ethernet fabric](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric) case the number of endpoints is not constrained by the interconnect fabric, as the interconnect fabric does not *see* the actual endpoints, it only *sees* the actual vRouters, or compute servers. This is not the case in an IP fabric, however. IP networks forward by using the destination IP address in the packet, which, in Calico Enterprise's case, is the destination endpoint. 
That means that the IP fabric nodes (ToR switches and/or spine switches, for example) must know the routes to each endpoint in the network. They learn this by participating as route reflector clients in the BGP mesh, just as the Calico Enterprise vRouter/compute server does. -| Field | Description | Accepted Values | Schema | Default | -| -------------------- | ----------------------------------------------------------------------------------------- | --------------- | ------ | ------- | -| interfaceIPv4Address | The IP address and subnet for the IPv4 WireGuard interface created by Felix on this node. | Optional | string | | -| interfaceIPv6Address | The IP address and subnet for the IPv6 WireGuard interface created by Felix on this node. | Optional | string | | +However, unlike a compute server which has a relatively unconstrained amount of memory, a physical switch is either memory constrained, or quite expensive. This means that the physical switch has a limit on how many *routes* it can handle. The current industry standard for modern commodity switches is in the range of 128,000 routes. This means that, without other routing *tricks*, such as aggregation, a Calico Enterprise installation that uses an IP fabric will be limited to the routing table size of its constituent network hardware, with a reasonable upper limit today of 128,000 endpoints. -## Supported operations[​](#supported-operations) +### Footnotes[​](#footnotes) + +### Note 1[​](#note-1) -| Datastore type | Create/Delete | Update | Get/List | Notes | -| --------------------- | ------------- | ------ | -------- | ----------------------------------------------------- | -| Kubernetes API server | No | Yes | Yes | `node` data is directly tied to the Kubernetes nodes. | +In Calico Enterprise's terminology, an endpoint is an IP address and interface. It could refer to a VM, a container, or even a process bound to an IP address running on a bare metal server. 
-### Packet capture +### Note 2[​](#note-2) - +This interconnect fabric provides the connectivity between the Calico Enterprise (v)Router (in almost all cases, the compute servers) nodes, as well as any other elements in the fabric (*e.g.* bare metal servers, border routers, and appliances). -A Packet Capture resource (`PacketCapture`) represents captured live traffic for debugging microservices and application interaction inside a Kubernetes cluster. +### Note 3[​](#note-3) -Calico Enterprise supports selecting one or multiple [WorkloadEndpoints resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) as described in the [Packet Capture](https://docs.tigera.io/calico-enterprise/latest/observability/packetcapture) guide. +If there is interest in a discussion of this approach, please let us know. The Calico Enterprise team could either arrange a discussion, or if there was enough interest, publish a follow-up tech note. -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `packetcapture`,`packetcaptures`, `packetcapture.projectcalico.org`, `packetcaptures.projectcalico.org` as well as abbreviations such as `packetcapture.p` and `packetcaptures.p`. +### Note 4[​](#note-4) -## Sample YAML[​](#sample-yaml) +However those tools are available if a given Calico Enterprise instance needs to utilize those policy constructs. -```yaml -apiVersion: projectcalico.org/v3 +### Note 5[​](#note-5) -kind: PacketCapture +The two byte AS space reserves approximately the last five thousand AS numbers for private use. There is no technical reason why other AS numbers could not be used. However the re-use of global scope AS numbers within a private infrastructure is strongly discouraged. The chance for routing system failure or incorrect routing is substantial, and not restricted to the entity that is doing the reuse. 
-metadata: +### Note 6[​](#note-6) - name: sample-capture +We first saw this design in a customer's lab, and thought it innovative enough to share (we asked them first, of course). Similar **AS Path Stripping** approaches are used in ISP networks, however. - namespace: sample-namespace +### Note 7[​](#note-7) -spec: +An Interior Gateway Protocol is a local routing protocol that does not cross an AS boundary. The primary IGPs in use today are OSPF and IS-IS. While complex iBGP networks still use IGP routing protocols, a data center is normally a fairly simple network, even if it has many routers in it. Therefore, in the data center case, the use of an IGP can often be disposed of. - selector: k8s-app == "sample-app" +### Note 8[​](#note-8) - filters: +A Next hop is an attribute of a route announced by a routing protocol. In simple terms a route is defined by a *target*, or the destination that is to be reached, and a *next hop*, which is the next router in the path to reach that target. There are many other characteristics in a route, but those are well beyond the scope of this post. - - protocol: TCP +### Note 9[​](#note-9) - ports: +A route reflector may be a physical router, a software appliance, or simply a BGP daemon. It only processes routing messages, and does not pass actual data plane traffic. However, some route reflectors are co-resident on regular routers that do pass data plane traffic. Although they may sit on one platform, the functions are distinct. 
- - 80 -``` +### Component resources -```yaml -apiVersion: projectcalico.org/v3 + -kind: PacketCapture +## [📄️Configuring the Calico Enterprise CNI plugins](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration) -metadata: +[Details for configuring the Calico Enterprise CNI plugins.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration) - name: sample-capture +## [📄️Configure resource requests and limits](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configure-resources) - namespace: sample-namespace +[Configure Resource requests and limits.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configure-resources) -spec: +## [🗃Calico Enterprise Kubernetes controllers](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/) - selector: all() +[2 items](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/) - startTime: '2021-08-26T12:00:00Z' +## [🗃Calico Enterprise node (cnx-node)](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/) - endTime: '2021-08-26T12:30:00Z' -``` +[2 items](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/) -## Packet capture definition[​](#packet-capture-definition) +## [🗃Typha for scaling](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/typha/) -### Metadata[​](#metadata) +[3 items](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/typha/) -| Field | Description | Accepted Values | Schema | Default | -| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- | -| name | The name of the packet capture. Required. | Alphanumeric string with optional `.`, `_`, or `-`. 
| string | | -| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" | +### Configuring the Calico Enterprise CNI plugins -### Spec[​](#spec) + -| Field | Description | Accepted Values | Schema | Default | -| --------- | ---------------------------------------------------------------------------------- | ----------------------- | ----------------------- | ------- | -| selector | Selects the endpoints to which this packet capture applies. | | [selector](#selector) | | -| filters | The ordered set of filters applied to traffic captured from an interface. | | [filters](#filters) | | -| startTime | Defines the start time from which this PacketCapture will start capturing packets. | Date in RFC 3339 format | [startTime](#starttime) | | -| endTime | Defines the end time at which this PacketCapture will stop capturing packets. | Date in RFC 3339 format | [endTime](#endtime) | | + -### Selector[​](#selector) +**Tab: Operator** -A label selector is an expression which either matches or does not match a resource based on its labels. +The Calico Enterprise CNI plugins do not need to be configured directly when installed by the operator. For a complete operator configuration reference, see [the installation API reference documentation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). -Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses. +**Tab: Manifest** -| Expression | Meaning | -| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| **Logical operators** | | -| `( )` | Matches if and only if `` matches. (Parentheses are used for grouping expressions.) | -| `! ` | Matches if and only if `` does not match. 
**Tip:** `!` is a special character at the start of a YAML string, if you need to use `!` at the start of a YAML string, enclose the string in quotes. | -| ` && ` | "And": matches if and only if both ``, and, `` matches | -| ` \|\| ` | "Or": matches if and only if either ``, or, `` matches. | -| **Match operators** | | -| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. | -| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. | -| `k == 'v'` | Matches resources with the label 'k' and value 'v'. | -| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` | -| `has(k)` | Matches resources with label 'k', independent of value. To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` | -| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set | -| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set | -| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' | -| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' | -| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' | +The Calico Enterprise CNI plugin is configured through the standard CNI [configuration mechanism](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration) -Operators have the following precedence: +A minimal configuration file that uses Calico Enterprise for networking and IPAM looks like this -- **Highest**: all the match operators -- Parentheses `( ... )` -- Negation with `!` -- Conjunction with `&&` -- **Lowest**: Disjunction with `||` +```json +{ -For example, the expression + "name": "any_name", -```text -! 
has(my-label) || my-label starts with 'prod' && role in {'frontend','business'} -``` + "cniVersion": "0.1.0", -Would be "bracketed" like this: + "type": "calico", -```text -((!(has(my-label)) || ((my-label starts with 'prod') && (role in {'frontend','business'})) -``` + "ipam": { -It would match: + "type": "calico-ipam" -- Any resource that did not have label "my-label". + } -- Any resource that both: +} +``` - +If the `node` container on a node registered with a `NODENAME` other than the node hostname, the CNI plugin on this node must be configured with the same `nodename`: - - Has a value for `my-label` that starts with "prod", and, - - Has a role label with value either "frontend", or "business". +```json +{ -### Filters[​](#filters) + "name": "any_name", -| Field | Description | Accepted Values | Schema | Default | -| -------- | ------------------------------------- | ------------------------------------------------------------ | ----------------- | ------- | -| protocol | Positive protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | -| ports | Positive match on the specified ports | | list of ports | | + "nodename": "", -Calico Enterprise supports the following syntax for expressing ports. + "type": "calico", -| Syntax | Example | Description | -| --------- | --------- | ---------------------------------------------------- | -| int | 80 | The exact (numeric) port specified | -| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end | + "ipam": { -An individual numeric port may be specified as a YAML/JSON integer. A port range must be represented as a string. Named ports are not supported by `PacketCapture`. Multiple ports can be defined to filter traffic. All specified ports or port ranges concatenated using the logical operator "OR". 
+ "type": "calico-ipam" -For example, this would be a valid list of ports: + } -```yaml -ports: [8080, '1234:5678'] +} ``` -Multiple filter rules can be defined to filter traffic. All rules are concatenated using the logical operator "OR". For example, filtering both TCP or UDP traffic will be defined as: +Additional configuration can be added as detailed below. -```yaml -filters: +## Generic[​](#generic) - - protocol: TCP +### Datastore type[​](#datastore-type) - - protocol: UDP -``` +The Calico Enterprise CNI plugin supports the following datastore: -Within a single filter rule, protocol and list of valid ports will be concatenated using the logical operator "AND". +- `datastore_type` (kubernetes) -For example, filtering TCP traffic and traffic for port 80 will be defined as: +### Logging[​](#logging) -```yaml -filters: +Logging is always to `stderr`. Logs are also written to `/var/log/calico/cni/cni.log` on each host by default. - - protocol: TCP +Logging can be configured using the following options in the netconf. - ports: [80] -``` +| Option name | Default | Description | +| -------------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------- | +| `log_level` | INFO | Logging level. Allowed levels are `ERROR`, `WARNING`, `INFO`, and `DEBUG`. | +| `log_file_path` | `/var/log/calico/cni/cni.log` | Location on each host to write CNI log files to. Logging to file can be disabled by removing this option. | +| `log_file_max_size` | 100 | Max file size in MB log files can reach before they are rotated. | +| `log_file_max_age` | 30 | Max age in days that old log files will be kept on the host before they are removed. | +| `log_file_max_count` | 10 | Max number of rotated log files allowed on the host before they are cleaned up. | -### StartTime[​](#starttime) +```json +{ -Defines the start time from which this PacketCapture will start capturing packets in RFC 3339 format. 
If omitted or the value is in the past, the capture will start immediately. If the value is changed to a future time, capture will stop immediately and restart at that time. + "name": "any_name", -```yaml -startTime: '2021-08-26T12:00:00Z' -``` + "cniVersion": "0.1.0", -### EndTime[​](#endtime) + "type": "calico", -Defines the end time from which this PacketCapture will stop capturing packets in RFC 3339 format. If omitted the capture will continue indefinitely. If the value is changed to the past, capture will stop immediately. + "log_level": "DEBUG", -```yaml -endTime: '2021-08-26T12:30:00Z' -``` + "log_file_path": "/var/log/calico/cni/cni.log", -### Status[​](#status) + "ipam": { -`PacketCaptureStatus` lists the current state of a `PacketCapture` and its generated capture files. + "type": "calico-ipam" -| Field | Description | -| ----- | -------------------------------------------------------------------------------------------------------------------------------- | -| files | It describes the location of the packet capture files that is identified via a node, its directory and the file names generated. | + } -### Files[​](#files) +} +``` -| Field | Description | -| --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| directory | The path inside the calico-node container for the generated files. | -| fileNames | The name of the generated file for a `PacketCapture` ordered alphanumerically. The active packet capture file will be identified using the following schema: `{}.pcap`. Rotated capture files name will contain an index matching the rotation timestamp. | -| node | The hostname of the Kubernetes node the files are located on. 
| -| state | Determines whether a PacketCapture is capturing traffic from any interface attached to the current node. Possible values include: Capturing, Scheduled, Finished, Error, WaitingForTraffic | +### IPAM[​](#ipam) -### Policy recommendation scope +When using Calico Enterprise IPAM, the following flags determine what IP addresses should be assigned. NOTE: These flags are strings and not boolean values. - +- `assign_ipv4` (default: `"true"`) +- `assign_ipv6` (default: `"false"`) - +A specific IP address can be chosen by using [`CNI_ARGS`](https://github.com/appc/cni/blob/master/SPEC.md#parameters) and setting `IP` to the desired value. - +By default, Calico Enterprise IPAM will assign IP addresses from all the available IP pools. - +Optionally, the list of possible IPv4 and IPv6 pools can also be specified via the following properties: - +- `ipv4_pools`: An array of CIDR strings or pool names. (e.g., `"ipv4_pools": ["10.0.0.0/24", "20.0.0.0/16", "default-ipv4-ippool"]`) +- `ipv6_pools`: An array of CIDR strings or pool names. (e.g., `"ipv6_pools": ["2001:db8::1/120", "namedpool"]`) - +Example CNI config: - +```json +{ -The policy recommendation scope is a collection of configuration options to control [policy recommendation](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations) in the web console. 
+ "name": "any_name", -To apply changes to this resource, use the following format: + "cniVersion": "0.1.0", -```text -$ kubectl patch policyrecommendationscope default -p '{"spec":{"":""}}' -``` + "type": "calico", -**Example** + "ipam": { -`$ kubectl patch policyrecommendationscope default -p '{"spec":{"interval":"5m"}}'` + "type": "calico-ipam", -## Definition[​](#definition) + "assign_ipv4": "true", -### + "assign_ipv6": "true", -### Metadata[​](#metadata) + "ipv4_pools": ["10.0.0.0/24", "20.0.0.0/16", "default-ipv4-ippool"], -| Field | Description | Accepted Values | Schema | Default | -| ----- | -------------------------------------------- | --------------- | ------ | ------- | -| name | The name of the policy recommendation scope. | `default` | string | | + "ipv6_pools": ["2001:db8::1/120", "default-ipv6-ippool"] -### Spec[​](#spec) + } -| Field | Description | Accepted Values | Schema | Default | -| ------------------- | ------------------------------------------------------------------------------------------------ | --------------- | ------ | -------------- | -| Interval | The frequency to create and refine policy recommendations. | | | 2.5m (minutes) | -| InitialLookback | Start time to look at flow logs when first creating a policy recommendation. | | | 24h (hours) | -| StabilizationPeriod | Time that a recommended policy should remain unchanged so it is stable and ready to be enforced. | | | 10m (minutes) | +} +``` -#### NamespaceSpec[​](#namespacespec) +> **SECONDARY:** `ipv6_pools` will be respected only when `assign_ipv6` is set to `"true"`. -| Field | Description | Accepted Values | Schema | Default | -| -------------------------------- | ------------------------------------------------------ | ---------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | -| recStatus | Defines the policy recommendation engine status. 
| Enabled/Disabled | | Disabled | -| selector | Selects the namespaces for generating recommendations. | | | `!(projectcalico.org/name starts with ''tigera-'') && !(projectcalico.org/name starts with ''calico-'') && !(projectcalico.org/name starts with ''kube-'')` | -| intraNamespacePassThroughTraffic | When true, sets all intra-namespace traffic to Pass | true/false | | false | +Any IP pools specified in the CNI config must have already been created. It is an error to specify IP pools in the config that do not exist. -### Profile +### Container settings[​](#container-settings) -A profile resource (`Profile`) represents a set of rules which are applied to the individual endpoints to which this profile has been assigned. +The following options allow configuration of settings within the container namespace. -Each Calico Enterprise endpoint or host endpoint can be assigned to zero or more profiles. +- allow\_ip\_forwarding (default is `false`) -This resource is not supported in `kubectl`. +```json +{ -## Sample YAML[​](#sample-yaml) + "name": "any_name", -The following sample profile applies the label `stage: development` to any endpoint that includes `dev-apps` in its list of profiles. + "cniVersion": "0.1.0", -```yaml -apiVersion: projectcalico.org/v3 + "type": "calico", -kind: Profile + "ipam": { -metadata: + "type": "calico-ipam" - name: dev-apps + }, -spec: + "container_settings": { - labelsToApply: + "allow_ip_forwarding": true - stage: development + } + +} ``` -## Definition[​](#definition) +### Readiness Gates[​](#readiness-gates) -### Metadata[​](#metadata) +The following option makes CNI plugin wait for specified endpoint(s) to be ready before configuring pod networking. -| Field | Description | Accepted Values | Schema | Default | -| ------ | ---------------------------------- | --------------------------------------------------- | ---------------------------------- | ------- | -| name | The name of the profile. Required. 
| Alphanumeric string with optional `.`, `_`, or `-`. | string | | -| labels | A set of labels for this profile. | | map of string key to string values | | +- `readiness_gates` -### Spec[​](#spec) +This is an optional property that takes an array of URLs. Each URL specified will be polled for readiness and pod networking will continue startup once all readiness\_gates are ready. -| Field | Description | Accepted Values | Schema | Default | -| -------------------- | -------------------------------------------------------------------------------------------------------------- | --------------- | ------------------------------------------------------------------------------------------------------ | ------- | -| ingress (deprecated) | The ingress rules belonging to this profile. | | List of [Rule](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#rule) | | -| egress (deprecated) | The egress rules belonging to this profile. | | List of [Rule](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#rule) | | -| labelsToApply | An optional set of labels to apply to each endpoint in this profile (in addition to the endpoint's own labels) | | map | | +Example CNI config: -For `Rule` details please see the [NetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) or [GlobalNetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) resource. +```json +{ -## Supported operations[​](#supported-operations) + "name": "any_name", -| Datastore type | Create/Delete | Update | Get/List | Notes | -| --------------------- | ------------- | ------ | -------- | ----------------------------------------------------------------------------------- | -| Kubernetes API server | No | No | Yes | Calico Enterprise profiles are pre-assigned for each Namespace and Service Account. 
| + "cniVersion": "0.1.0", -### Remote cluster configuration + "type": "calico", -A remote cluster configuration resource (RemoteClusterConfiguration) represents a cluster in a federation of clusters. Each remote cluster needs a configuration to be specified to allow the local cluster to access resources on the remote cluster. The connection is one-way: the information flows only from the remote to the local cluster. To share information from the local cluster to the remote one a remote cluster configuration resource must be created on the remote cluster. + "ipam": { -A remote cluster configuration causes Typha and `calicoq` to retrieve the following resources from a remote cluster: + "type": "calico-ipam" -- [Workload endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) -- [Host endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint) -- [Profiles](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) (rules are not retrieved from remote profiles, only the `LabelsToApply` field is used) + }, -When using the Kubernetes API datastore with RBAC enabled on the remote cluster, the RBAC rules must be configured to allow access to these resources. + "readiness_gates": ["http://localhost:9099/readiness", "http://localhost:8888/status"] -For more details on the federation feature refer to the [Overview](https://docs.tigera.io/calico-enterprise/latest/multicluster/federation/overview). +} +``` -For the meaning of the fields matches the configuration used for configuring `calicoctl`, see [Kubernetes datastore](https://docs.tigera.io/calico-enterprise/latest/operations/clis/calicoctl/configure/datastore) instructions for more details. +## Kubernetes specific[​](#kubernetes-specific) -This resource is not supported in `kubectl`. 
+When using the Calico Enterprise CNI plugin with Kubernetes, the plugin must be able to access the Kubernetes API server to find the labels assigned to the Kubernetes pods. The recommended way to configure access is through a `kubeconfig` file specified in the `kubernetes` section of the network config. e.g. -## Sample YAML[​](#sample-yaml) +```json +{ -For a remote Kubernetes datastore cluster: + "name": "any_name", -```yaml -apiVersion: projectcalico.org/v3 + "cniVersion": "0.1.0", -kind: RemoteClusterConfiguration + "type": "calico", -metadata: + "kubernetes": { - name: cluster1 + "kubeconfig": "/path/to/kubeconfig" -spec: + }, - datastoreType: kubernetes + "ipam": { - kubeconfig: /etc/tigera-federation-remotecluster/kubeconfig-rem-cluster-1 + "type": "calico-ipam" + + } + +} ``` -For a remote etcdv3 cluster: +As a convenience, the API location can also be configured directly, e.g. -```yaml -apiVersion: projectcalico.org/v3 +```json +{ -kind: RemoteClusterConfiguration + "name": "any_name", -metadata: + "cniVersion": "0.1.0", - name: cluster1 + "type": "calico", -spec: + "kubernetes": { - datastoreType: etcdv3 + "k8s_api_root": "http://127.0.0.1:8080" - etcdEndpoints: 'https://10.0.0.1:2379,https://10.0.0.2:2379' -``` + }, -## RemoteClusterConfiguration Definition[​](#remoteclusterconfiguration-definition) + "ipam": { -### Metadata[​](#metadata) + "type": "calico-ipam" -| Field | Description | Accepted Values | Schema | -| ----- | ---------------------------------------------- | ----------------------------------------- | ------ | -| name | The name of this remote cluster configuration. 
| Lower-case alphanumeric with optional `-` | string | + } -### Spec[​](#spec) +} +``` -| Field | Secret key | Description | Accepted Values | Schema | Default | -| ------------------- | ------------- | ------------------------------------------------------------------ | --------------------- | -------------------------- | ------- | -| clusterAccessSecret | | Reference to a Secret that contains connection information | | Kubernetes ObjectReference | none | -| datastoreType | datastoreType | The datastore type of the remote cluster. | `etcdv3` `kubernetes` | string | none | -| etcdEndpoints | etcdEndpoints | A comma separated list of etcd endpoints. | | string | none | -| etcdUsername | etcdUsername | Username for RBAC. | | string | none | -| etcdPassword | etcdPassword | Password for the given username. | | string | none | -| etcdKeyFile | etcdKey | Path to the etcd key file. | | string | none | -| etcdCertFile | etcdCert | Path to the etcd certificate file. | | string | none | -| etcdCACertFile | etcdCACert | Path to the etcd CA certificate file. | | string | none | -| kubeconfig | kubeconfig | Location of the `kubeconfig` file. | | string | none | -| k8sAPIEndpoint | | Location of the kubernetes API server. | | string | none | -| k8sKeyFile | | Location of a client key for accessing the Kubernetes API. | | string | none | -| k8sCertFile | | Location of a client certificate for accessing the Kubernetes API. | | string | none | -| k8sCAFile | | Location of a CA certificate. | | string | none | -| k8sAPIToken | | Token to be used for accessing the Kubernetes API. | | string | none | +### Enabling Kubernetes policy[​](#enabling-kubernetes-policy) -When using the `clusterAccessSecret` field, all other fields in the RemoteClusterconfiguration resource must be empty. When the `clusterAccessSecret` reference is used, all datastore configuration will be read from the referenced Secret using the "Secret key" fields named in the above table as the data key in the Secret. 
The fields read from a Secret that were file path or locations in a RemoteClusterConfiguration will be expected to be the file contents when read from a Secret. +If you wish to use the Kubernetes `NetworkPolicy` resource then you must set a policy type in the network config. There is a single supported policy type, `k8s`. When set, you must also run `tigera/kube-controllers` with the policy, profile, and workloadendpoint controllers enabled. -All of the fields that start with `etcd` are only valid when the DatastoreType is etcdv3 and the fields that start with `k8s` or `kube` are only valid when the datastore type is kubernetes. The `kubeconfig` field and the fields that end with `File` must be accessible to Typha and `calicoq`, this does not apply when the data is coming from a Secret referenced by `clusterAccessSecret`. +```json +{ -When the DatastoreType is `kubernetes`, the `kubeconfig` file is optional but since it can contain all of the authentication information needed to access the Kubernetes API server it is generally easier to use than setting all the individual `k8s` fields. The other kubernetes fields can be used by themselves though or to override specific kubeconfig values. + "name": "any_name", -## Supported operations[​](#supported-operations) + "cniVersion": "0.1.0", -| Datastore type | Create/Delete | Update | Get/List | Notes | -| --------------------- | ------------- | ------ | -------- | ----- | -| etcdv3 | Yes | Yes | Yes | | -| Kubernetes API server | Yes | Yes | Yes | | + "type": "calico", -### Security event webhook + "policy": { -A security event webhook (`SecurityEventWebhook`) is a cluster-scoped resource that represents instances of integrations with external systems through the webhook callback mechanism. 
+ "type": "k8s" -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases can be used to specify the resource type on the CLI: `securityeventwebhook.projectcalico.org`, `securityeventwebhooks.projectcalico.org` and abbreviations such as `securityeventwebhook.p` and `securityeventwebhooks.p`. + }, -## Sample YAML[​](#sample-yaml) + "kubernetes": { -```yaml -apiVersion: projectcalico.org/v3 + "kubeconfig": "/path/to/kubeconfig" -kind: SecurityEventWebhook + }, -metadata: + "ipam": { - name: jira-webhook + "type": "calico-ipam" - annotations: + } - webhooks.projectcalico.org/labels: 'Cluster name:Calico Enterprise' +} +``` -spec: +When using `type: k8s`, the Calico Enterprise CNI plugin requires read-only Kubernetes API access to the `Pods` resource in all namespaces. - consumer: Jira + - state: Enabled +### Enabling policy setup timeout[​](#enabling-policy-setup-timeout) - query: type=waf +The Calico Enterprise CNI plugin can be configured to prevent new pods from starting their containers until one of the following conditions occurs: - config: +- The pod's policy has finished being programmed. +- A specified amount of time has elapsed. - - name: url +By enabling this feature, you can avoid errors that can occur when a pod tries to start before the pod's policy is programmed by its host. - value: 'https://your-jira-instance-name.atlassian.net/rest/api/2/issue/' + - - name: project +**Tab: Operator** - value: PRJ +The policy setup timeout can be configured by setting the `linuxPolicySetupTimeoutSeconds` field in the [calicoNetwork spec](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#caliconetworkspec) of the default `operator.tigera.io/v1/installation` resource. 
- - name: issueType +The following example configures the CNI to delay a pod from starting its containers for up to 10 seconds, or until the pod's data plane has been programmed: - value: Bug +```yaml +kind: Installation - - name: username +apiVersion: operator.tigera.io/v1 - valueFrom: +metadata: - secretKeyRef: + name: default - name: jira-secrets +spec: - key: username + calicoNetwork: - - name: apiToken + linuxPolicySetupTimeoutSeconds: 10 +``` - valueFrom: +**Tab: Manifest** - secretKeyRef: +The policy setup timeout can be configured by setting the `policy_setup_timeout_seconds` option in the CNI config. - name: jira-secrets +Example CNI config: - key: token -``` +```json +{ + + "name": "any_name", + + "cniVersion": "0.1.0", -## Security event webhook definition[​](#security-event-webhook-definition) + "type": "calico", -### Metadata[​](#metadata) + "policy_setup_timeout_seconds": 10, -| Field | Description | Accepted Values | Schema | -| ----- | --------------------------------------------------------- | --------------------------------------------------- | ------ | -| name | Unique name to describe this resource instance. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | + "ipam": { -#### Annotations[​](#annotations) + "type": "calico-ipam" -Security event webhooks provide an easy way to add arbitrary data to the webhook generated HTTP payload through the metadata annotation. The value of the `webhooks.projectcalico.org/labels`, if present, will be converted into the payload labels. The value must conform to the following rules: + } -- Key and value data for a single label are separated by the `:` character, -- Multiple labels are separated by the `,` character. +} +``` -### Spec[​](#spec) +The Calico Enterprise CNI plugin reads Felix's `endpoint-status` directory to determine when the data plane has been programmed for a pod. 
If left unset, the Calico Enterprise CNI plugin will look for the directory at `/var/run/calico/endpoint-status`. The path `/var/run/calico` is commonly mounted to the Calico Enterprise DaemonSet, meaning it can be written to by the Felix container, and read by the (host-namespace) Calico Enterprise CNI plugin. To enable the `endpoint-status` directory, and adjust which directory of the Felix container it is written to, the `endpointStatusPathPrefix` option must be configured for [Felix](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/configuration). -| Field | Description | Accepted Values | Schema | Required | -| -------- | -------------------------------------------------------------------------------------------------------------- | ---------------------------------- | ----------------------------------------------------------------------- | -------- | -| consumer | Specifies intended consumer of the webhook. | Slack, Jira, Alertmanager, Generic | string | yes | -| state | Defines current state of the webhook. | Enabled, Disabled, Debug | string | yes | -| query | Defines query used to retrieve security events from Calico. | [see Query](#query) | string | yes | -| config | Webhook configuration, required contents of this structure is determined by the value of the `consumer` field. | [see Config](#configuration) | list of [SecurityEventWebhookConfigVar](#securityeventwebhookconfigvar) | yes | +To adjust where the Calico Enterprise CNI plugin looks for the `endpoint-status` directory in the host filesystem, you must set the `endpoint_status_dir` option. 
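+The Felix side of this hand-off is enabled through the `FelixConfiguration` resource. A minimal sketch, assuming the conventional `/var/run/calico` mount described above (the value shown is illustrative, not a new requirement):

+```yaml
+apiVersion: projectcalico.org/v3
+kind: FelixConfiguration
+metadata:
+  name: default
+spec:
+  # Felix writes per-endpoint status files under <prefix>/endpoint-status;
+  # the CNI plugin reads them to decide when a pod's policy is programmed.
+  endpointStatusPathPrefix: /var/run/calico
+```

+With this prefix, Felix's output directory lines up with the CNI plugin's default lookup path of `/var/run/calico/endpoint-status`.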
-### SecurityEventWebhookConfigVar[​](#securityeventwebhookconfigvar) +Example CNI config: -| Field | Description | Schema | Required | -| --------- | ------------------------------------------------------------------------- | --------------------------------------------------------------------------- | ----------------------------------- | -| name | Configuration variable name. | string | yes | -| value | Direct value for the variable. | string | yes if `valueFrom` is not specified | -| valueFrom | Value defined either in a Kubernetes ConfigMap or in a Kubernetes Secret. | [SecurityEventWebhookConfigVarSource](#securityeventwebhookconfigvarsource) | yes if `value` is not specified | +```json +{ -### SecurityEventWebhookConfigVarSource[​](#securityeventwebhookconfigvarsource) + "name": "any_name", -| Field | Description | Schema | Required | -| --------------- | ------------------------------- | ------------------------------------------------------------------------------------------------------------ | ----------------------------------------- | -| configMapKeyRef | Kubernetes ConfigMap reference. | `ConfigMapKeySelector` (referenced ConfigMap key should exist in the `tigera-intrusion-detection` namespace) | yes if `secretKeyRef` is not specified | -| secretKeyRef | Kubernetes Secret reference. | `SecretKeySelector` (referenced Secret key should exist in the `tigera-intrusion-detection` namespace) | yes if `configMapKeyRef` is not specified | + "cniVersion": "0.1.0", -### Status[​](#status) + "type": "calico", -Field `status` reflects the health of a webhook. It is a list of [Kubernetes Conditions](https://pkg.go.dev/k8s.io/apimachinery@v0.23.0/pkg/apis/meta/v1#Condition). + "policy_setup_timeout_seconds": 10, -## Query[​](#query) + "endpoint_status_dir": "/path/to/endpoint-status", -Security event webhooks use a domain-specific query language to select which records from the data set should trigger the HTTP request. 
+ "ipam": { -The query language is composed of any number of selectors, combined with boolean expressions (`AND`, `OR`, and `NOT`), set expressions (`IN` and `NOTIN`) and bracketed subexpressions. These are translated by Calico Enterprise to Elastic DSL queries that are executed on the backend. + "type": "calico-ipam" -Set expressions support wildcard operators asterisk (`*`) and question mark (`?`). The asterisk sign matches zero or more characters and the question mark matches a single character. + } -A selector consists of a key, comparator, and value. Keys and values may be identifiers consisting of alphanumerics and underscores (`_`) with the first character being alphabetic or an underscore, or may be quoted strings. Values may also be integer or floating point numbers. Comparators may be `=` (equal), `!=` (not equal), `<` (less than), `<=` (less than or equal), `>` (greater than), or `>=` (greater than or equal). +} +``` -## Configuration[​](#configuration) + -Data required to be present in the `config` section of the security event webhook `spec` depends on the intended consumer for the HTTP requests generated by the webhook. The value in the `consumer` field of the `spec` specifies the consumer and therefore data that is required to be present. Currently Calico supports the following consumers: `Slack`, `Jira`, `Alertmanager` and `Generic`. Payloads generated by the webhook will be different for each of the listed use cases. +## IPAM[​](#ipam-1) -### Slack[​](#slack) +### Using host-local IPAM[​](#using-host-local-ipam) -Data fields required for the `Slack` value present in the `spec.consumer` field of a webhook: +Calico can be configured to use [host-local IPAM](https://www.cni.dev/plugins/current/ipam/host-local/) instead of the default `calico-ipam`. Host local IPAM uses a pre-determined CIDR per-host, and stores allocations locally on each node. 
This is in contrast to Calico IPAM, which dynamically allocates blocks of addresses and single addresses alike in response to cluster needs. -| Field | Description | Required | -| ----- | ------------------------------------------------------------------------------- | -------- | -| url | A valid Slack [Incoming Webhook URL](https://api.slack.com/messaging/webhooks). | yes | +Host local IPAM is generally only used on clusters where integration with the Kubernetes [route controller](https://kubernetes.io/docs/concepts/architecture/cloud-controller/#route-controller) is necessary. Note that some Calico features - such as the ability to request a specific address or pool for a pod - require Calico IPAM to function, and will not work with host-local IPAM enabled. -### Generic[​](#generic) + -Data fields required for the `Generic` value present in the `spec.consumer` field of a webhook: +**Tab: Operator** -| Field | Description | Required | -| ----- | ---------------------------------------------------- | -------- | -| url | A generic and valid URL of another HTTP(s) endpoint. | yes | +The `host-local` IPAM plugin can be configured by setting the `Spec.CNI.IPAM.Plugin` field to `HostLocal` on the [operator.tigera.io/Installation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#installation) API. -### Jira[​](#jira) +Calico will use the `host-local` IPAM plugin to allocate IPv4 addresses from the node's IPv4 pod CIDR if there is an IPv4 pool configured in `Spec.IPPools`, and an IPv6 address from the node's IPv6 pod CIDR if there is an IPv6 pool configured in `Spec.IPPools`. -Data fields required for the `Jira` value present in the `spec.consumer` field of a webhook: +The following example configures Calico to assign dual-stack IPs to pods using the host-local IPAM plugin. 
-| Field | Description | Required | -| --------- | ---------------------------------------------------------------------- | -------- | -| url | URL of a Jira REST API v2 endpoint for the organisation. | yes | -| project | A valid Jira project abbreviation. | yes | -| issueType | A valid issue type for the selected project, examples: `Bug` or `Task` | yes | -| username | A valid Jira user name. | yes | -| apiToken | A valid Jira API token for the user. | yes | +```yaml +kind: Installation -### Alertmanager[​](#alertmanager) +apiVersion: operator.tigera.io/v1 -Data fields required for the `Alertmanager` value present in the `spec.consumer` field of a webhook: +metadata: -| Field | Description | Required | -| --------- | --------------------------------------------------------------------------------------- | -------- | -| url | URL of the Alertmanager REST API v2 endpoint for alerts (ending with `/api/v2/alerts`). | yes | -| basicAuth | MD5 checksum of username and password separated by the colon character. | no | -| ca.crt | Certificate authority in PEM format (required for mTLS configuration). | no | -| tls.key | Private key in PEM format (required for mTLS configuration). | no | -| tls.crt | Certificate in PEM format (required for mTLS configuration). | no | + name: default -### Staged global network policy +spec: - + calicoNetwork: - + ipPools: - + - cidr: 192.168.0.0/16 - + - cidr: 2001:db8::/64 - + cni: - + type: Calico - + ipam: -A staged global network policy resource (`StagedGlobalNetworkPolicy`) represents an ordered set of rules which are applied to a collection of endpoints that match a [label selector](#selector). These rules are used to preview network behavior and do not enforce network traffic. For enforcing network traffic, see [global network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy). + type: HostLocal +``` -`StagedGlobalNetworkPolicy` is not a namespaced resource. 
`StagedGlobalNetworkPolicy` applies to [workload endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) in all namespaces, and to [host endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint). Select a namespace in a `StagedGlobalNetworkPolicy` standard selector by using `projectcalico.org/namespace` as the label name and a namespace name as the value to compare against, e.g., `projectcalico.org/namespace == "default"`. See [staged network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/stagednetworkpolicy) for staged namespaced network policy.

+**Tab: Manifest**

-`StagedGlobalNetworkPolicy` resources can be used to define network connectivity rules between groups of Calico Enterprise endpoints and host endpoints, and take precedence over [Profile resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) if any are defined.

+When using the CNI `host-local` IPAM plugin, two special values, `usePodCidr` and `usePodCidrIPv6`, are allowed for the `subnet` field (either at the top-level, or in a "range"). This tells the plugin to determine the subnet to use from the Kubernetes API based on the Node.podCIDR field. Calico Enterprise does not use the `gateway` field of a range, so that field is not required and is ignored if present.

-StagedGlobalNetworkPolicies are organized into [tiers](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the next [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), to enable hierarchical security policy. 
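-The namespace-selection syntax described above can be sketched as a rule fragment inside a `StagedGlobalNetworkPolicy` (the `role` label and its value are illustrative, not taken from this page):

-```yaml
-ingress:
-  - action: Allow
-    source:
-      # Restrict the match to workloads in the "default" namespace.
-      selector: projectcalico.org/namespace == "default" && role == 'frontend'
-```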
+> **SECONDARY:** `usePodCidr` and `usePodCidrIPv6` can only be used as the value of the `subnet` field; they cannot be used in `rangeStart` or `rangeEnd`, so those fields are not useful when `subnet` is set to `usePodCidr` or `usePodCidrIPv6`.

-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `stagedglobalnetworkpolicy.projectcalico.org`, `stagedglobalnetworkpolicies.projectcalico.org` and abbreviations such as `stagedglobalnetworkpolicy.p` and `stagedglobalnetworkpolicies.p`.

+Calico Enterprise supports the host-local IPAM plugin's `routes` field as follows:

-## Sample YAML[​](#sample-yaml)

-This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on `database` endpoints.

+- If there is no `routes` field, Calico Enterprise will install a default `0.0.0.0/0` and/or `::/0` route into the pod (depending on whether the pod has an IPv4 and/or IPv6 address).

+- If there is a `routes` field, then Calico Enterprise will program *only* the routes in the routes field into the pod. Since Calico Enterprise implements a point-to-point link into the pod, the `gw` field is not required and it will be ignored if present. All routes that Calico Enterprise installs will have Calico Enterprise's link-local IP as the next hop. 
-```yaml -apiVersion: projectcalico.org/v3 +Calico Enterprise CNI plugin configuration: -kind: StagedGlobalNetworkPolicy +- `node_name` + - The node name to use when looking up the CIDR value (defaults to current hostname) -metadata: +```json +{ - name: internal-access.allow-tcp-6379 + "name": "any_name", -spec: + "cniVersion": "0.1.0", - tier: internal-access + "type": "calico", - selector: role == 'database' + "kubernetes": { - types: + "kubeconfig": "/path/to/kubeconfig", - - Ingress + "node_name": "node-name-in-k8s" - - Egress + }, - ingress: + "ipam": { - - action: Allow + "type": "host-local", - protocol: TCP + "ranges": [[{ "subnet": "usePodCidr" }], [{ "subnet": "usePodCidrIPv6" }]], - source: + "routes": [{ "dst": "0.0.0.0/0" }, { "dst": "2001:db8::/96" }] - selector: role == 'frontend' + } - destination: +} +``` - ports: +When making use of the `usePodCidr` or `usePodCidrIPv6` options, the Calico Enterprise CNI plugin requires read-only Kubernetes API access to the `Nodes` resource. - - 6379 +#### Configuring node and typha[​](#configuring-node-and-typha) - egress: +When using `host-local` IPAM with the Kubernetes API datastore, you must configure both node and the Typha deployment to use the `Node.podCIDR` field by setting the environment variable `USE_POD_CIDR=true` in each. - - action: Allow -``` + -## Definition[​](#definition) +### Using Kubernetes annotations[​](#using-kubernetes-annotations) -### Metadata[​](#metadata) +#### Specifying IP pools on a per-namespace or per-pod basis[​](#specifying-ip-pools-on-a-per-namespace-or-per-pod-basis) -| Field | Description | Accepted Values | Schema | Default | -| ----- | ----------------------------------------- | --------------------------------------------------- | ------ | ------- | -| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. 
| string | | +In addition to specifying IP pools in the CNI config as discussed above, Calico Enterprise IPAM supports specifying IP pools per-namespace or per-pod using the following [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/). -### Spec[​](#spec) +- `cni.projectcalico.org/ipv4pools`: A list of configured IPv4 Pools from which to choose an address for the pod. -| Field | Description | Accepted Values | Schema | Default | -| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- | -| order | Controls the order of precedence. Calico Enterprise applies the policy with the lowest value first. | | float | | -| tier | Name of the [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier) this policy belongs to. | | string | `default` | -| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() | -| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select all service accounts in the cluster with a specific name using the `projectcalico.org/name` label. | | [selector](#selector) | all() | -| namespaceSelector | Selects the namespace(s) to which this policy applies. Select a specific namespace by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() | -| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. 
| `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* | -| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | | -| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | | -| doNotTrack\*\* | Indicates to apply the rules in this policy before any data plane connection tracking, and that packets allowed by these rules should not be tracked. | true, false | boolean | false | -| preDNAT\*\* | Indicates to apply the rules in this policy before any DNAT. | true, false | boolean | false | -| applyOnForward\*\* | Indicates to apply the rules in this policy on forwarded traffic as well as to locally terminated traffic. | true, false | boolean | false | -| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | | + Example: -\* If `types` has no value, Calico Enterprise defaults as follows. + ```yaml + annotations: -> | Ingress Rules Present | Egress Rules Present | `Types` value | -> | --------------------- | -------------------- | ----------------- | -> | No | No | `Ingress` | -> | Yes | No | `Ingress` | -> | No | Yes | `Egress` | -> | Yes | Yes | `Ingress, Egress` | + 'cni.projectcalico.org/ipv4pools': '["default-ipv4-ippool"]' + ``` -\*\* The `doNotTrack` and `preDNAT` and `applyOnForward` fields are meaningful only when applying policy to a [host endpoint](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint). +- `cni.projectcalico.org/ipv6pools`: A list of configured IPv6 Pools from which to choose an address for the pod. -Only one of `doNotTrack` and `preDNAT` may be set to `true` (in a given policy). 
If they are both `false`, or when applying the policy to a [workload endpoint](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint), the policy is enforced after connection tracking and any DNAT. + Example: -`applyOnForward` must be set to `true` if either `doNotTrack` or `preDNAT` is `true` because for a given policy, any untracked rules or rules before DNAT will in practice apply to forwarded traffic. + ```yaml + annotations: -See [Using Calico Enterprise to Secure Host Interfaces](https://docs.tigera.io/calico-enterprise/latest/reference/host-endpoints/) for how `doNotTrack` and `preDNAT` and `applyOnForward` can be useful for host endpoints. + 'cni.projectcalico.org/ipv6pools': '["2001:db8::1/120"]' + ``` -### Rule[​](#rule) +If provided, these IP pools will override any IP pools specified in the CNI config. -A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order. +> **SECONDARY:** This requires the IP pools to exist before `ipv4pools` or `ipv6pools` annotations are used. Requesting a subset of an IP pool is not supported. IP pools requested in the annotations must exactly match a configured [IPPool](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool) resource. -| Field | Description | Accepted Values | Schema | Default | -| ----------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | ----------------------------- | ------- | -| metadata | Per-rule metadata. | | [RuleMetadata](#rulemetadata) | | -| action | Action to perform when matching this rule. | `Allow`, `Deny`, `Log`, `Pass` | string | | -| protocol | Positive protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | -| notProtocol | Negative protocol match. 
| `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | -| icmp | ICMP match criteria. | | [ICMP](#icmp) | | -| notICMP | Negative match on ICMP. | | [ICMP](#icmp) | | -| ipVersion | Positive IP version match. | `4`, `6` | integer | | -| source | Source match parameters. | | [EntityRule](#entityrule) | | -| destination | Destination match parameters. | | [EntityRule](#entityrule) | | -| http | Match HTTP request parameters. Application layer policy must be enabled to use this field. | | [HTTPMatch](#httpmatch) | | +> **SECONDARY:** The Calico Enterprise CNI plugin supports specifying an annotation per namespace. If both the namespace and the pod have this annotation, the pod information will be used. Otherwise, if only the namespace has the annotation the annotation of the namespace will be used for each pod in it. -After a `Log` action, processing continues with the next rule; `Allow` and `Deny` are immediate and final and no further rules are processed. +#### Requesting a specific IP address[​](#requesting-a-specific-ip-address) -An `action` of `Pass` in a `NetworkPolicy` or `GlobalNetworkPolicy` will skip over the remaining policies and jump to the first [profile](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) assigned to the endpoint, applying the policy configured in the profile; if there are no Profiles configured for the endpoint the default applied action is `Deny`. +You can also request a specific IP address through [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) with Calico Enterprise IPAM. There are two annotations to request a specific IP address: -### RuleMetadata[​](#rulemetadata) +- `cni.projectcalico.org/ipAddrs`: A list of IPv4 and/or IPv6 addresses to assign to the Pod. The requested IP addresses will be assigned from Calico Enterprise IPAM and must exist within a configured IP pool. 
-Metadata associated with a specific rule (rather than the policy as a whole). The contents of the metadata does not affect how a rule is interpreted or enforced; it is simply a way to store additional information for use by operators or applications that interact with Calico Enterprise. + Example: -| Field | Description | Schema | Default | -| ----------- | ----------------------------------- | ----------------------- | ------- | -| annotations | Arbitrary non-identifying metadata. | map of string to string | | + ```yaml + annotations: -Example: + 'cni.projectcalico.org/ipAddrs': '["192.168.0.1"]' + ``` -```yaml -metadata: +- `cni.projectcalico.org/ipAddrsNoIpam`: A list of IPv4 and/or IPv6 addresses to assign to the Pod, bypassing IPAM. Any IP conflicts and routing have to be taken care of manually or by some other system. Calico Enterprise will only distribute routes to a Pod if its IP address falls within a Calico Enterprise IP pool using BGP mode. Calico will not distribute ipAddrsNoIpam routes when operating in VXLAN mode. If you assign an IP address that is not in a Calico Enterprise IP pool or if its IP address falls within a Calico Enterprise IP pool that uses VXLAN encapsulation, you must ensure that routing to that IP address is taken care of through another mechanism. - annotations: + Example: - app: database + ```yaml + annotations: - owner: devops -``` + 'cni.projectcalico.org/ipAddrsNoIpam': '["10.0.0.1"]' + ``` -Annotations follow the [same rules as Kubernetes for valid syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set). + The ipAddrsNoIpam feature is disabled by default. It can be enabled in the feature\_control section of the CNI network config: -On Linux with the iptables data plane, rule annotations are rendered as comments in the form `-m comment --comment "="` on the iptables rule(s) that correspond to the Calico Enterprise rule. 
+ ```json + { -### ICMP[​](#icmp) + "name": "any_name", -| Field | Description | Accepted Values | Schema | Default | -| ----- | ------------------- | -------------------- | ------- | ------- | -| type | Match on ICMP type. | Can be integer 0-254 | integer | | -| code | Match on ICMP code. | Can be integer 0-255 | integer | | + "cniVersion": "0.1.0", -### EntityRule[​](#entityrule) + "type": "calico", -Entity rules specify the attributes of the source or destination of a packet that must match for the rule as a whole to match. Packets can be matched on combinations of: + "ipam": { -- Identity of the source/destination, by using [Selectors](#selectors) or by specifying a particular Kubernetes `Service`. Selectors can match [workload endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint), [host endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint) and ([namespaced](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkset) or [global](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset)) network sets. -- Source/destination IP address, protocol and port. + "type": "calico-ipam" -If the rule contains multiple match criteria (for example, an IP and a port) then all match criteria must match for the rule as a whole to match a packet. 
+ }, -| Field | Description | Accepted Values | Schema | Default | -| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------- | -| nets | Match packets with IP in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | -| notNets | Negative match on CIDRs. Match packets with IP not in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | -| selector | Positive match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | -| notSelector | Negative match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | -| namespaceSelector | Positive match on selected namespaces. If specified, only workload endpoints in the selected Kubernetes namespaces are matched. Matches namespaces based on the labels that have been applied to the namespaces. 
Defines the scope that selectors will apply to, if not defined then selectors apply to the NetworkPolicy's namespace. Match a specific namespace by name using the `projectcalico.org/name` label. Select the non-namespaced resources like GlobalNetworkSet(s), host endpoints to which this policy applies by using `global()` selector. | Valid selector | [selector](#selector) | | -| ports | Positive match on the specified ports | | list of [ports](#ports) | | -| domains | Positive match on [domain names](#exact-and-wildcard-domain-names). | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list of strings | | -| notPorts | Negative match on the specified ports | | list of [ports](#ports) | | -| serviceAccounts | Match endpoints running under service accounts. If a `namespaceSelector` is also defined, the set of service accounts this applies to is limited to the service accounts in the selected namespaces. | | [ServiceAccountMatch](#serviceaccountmatch) | | -| services | Match the specified service(s). If specified on egress rule destinations, no other selection criteria can be set. If specified on ingress rule sources, only positive or negative matches on ports can be specified. | | [ServiceMatch](#servicematch) | | + "feature_control": { -> **SECONDARY:** You cannot mix IPv4 and IPv6 CIDRs in a single rule using `nets` or `notNets`. If you need to match both, create 2 rules. + "ip_addrs_no_ipam": true -#### Selector performance in EntityRules[​](#selector-performance-in-entityrules) + } -When rendering policy into the data plane, Calico Enterprise must identify the endpoints that match the selectors in all active rules. This calculation is optimized for certain common selector types. Using the optimized selector types reduces CPU usage (and policy rendering time) by orders of magnitude. This becomes important at high scale (hundreds of active rules, hundreds of thousands of endpoints). 
+ } + ``` -The optimized operators are as follows: + > **WARNING:** This feature allows for the bypassing of network policy via IP spoofing. Users should make sure the proper admission control is in place to prevent users from selecting arbitrary IP addresses. -- `label == "value"` -- `label in { 'v1', 'v2' }` -- `has(label)` -- ` && ` is optimized if **either** `` or `` is optimized. +> **SECONDARY:** +> +> - The `ipAddrs` and `ipAddrsNoIpam` annotations can't be used together. +> - You can only specify one IPv4/IPv6 or one IPv4 and one IPv6 address with these annotations. +> - When `ipAddrs` or `ipAddrsNoIpam` is used with `ipv4pools` or `ipv6pools`, `ipAddrs` / `ipAddrsNoIpam` take priority. -The following perform like `has(label)`. All endpoints with the label will be scanned to find matches: +#### Requesting a floating IP[​](#requesting-a-floating-ip) -- `label contains 's'` -- `label starts with 's'` -- `label ends with 's'` +You can request a floating IP address for a pod through [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) with Calico Enterprise. -The other operators, and in particular, `all()`, `!`, `||` and `!=` are not optimized. +> **SECONDARY:** The specified address must belong to an IP Pool for advertisement to work properly. -Examples: +- `cni.projectcalico.org/floatingIPs`: A list of floating IPs which will be assigned to the pod's workload endpoint. -- `a == 'b'` - optimized -- `a == 'b' && has(c)` - optimized -- `a == 'b' || has(c)` - **not** optimized due to use of `||` -- `c != 'd'` - **not** optimized due to use of `!=` -- `!has(a)` - **not** optimized due to use of `!` -- `a == 'b' && c != 'd'` - optimized, `a =='b'` is optimized so `a == 'b' && ` is optimized. -- `c != 'd' && a == 'b'` - optimized, `a =='b'` is optimized so ` && a == 'b'` is optimized. 
+ Example: -### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names) + ```yaml + annotations: -The `domains` field is only valid for egress Allow rules. It restricts the rule to apply only to traffic to one of the specified domains. If this field is specified, the parent [Rule](#rule)'s `action` must be `Allow`, and `nets` and `selector` must both be left empty. + 'cni.projectcalico.org/floatingIPs': '["10.0.0.1"]' + ``` -When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example: + The floatingIPs feature is disabled by default. It can be enabled in the feature\_control section of the CNI network config: -- `microsoft.com` -- `tigera.io` + ```json + { -With a single asterisk in any part of the domain name, it matches 1 or more path components at that position. For example: + "name": "any_name", -- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com` -- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com` -- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on + "cniVersion": "0.1.0", -**Not** supported are: + "type": "calico", -- Multiple wildcards in the same domain, for example: `*.*.mycompany.com` -- Asterisks that are not the entire component, for example: `www.g*.com` -- A wildcard as the last component, for example: `www.mycompany.*` -- More general wildcards, such as regular expressions + "ipam": { -> **SECONDARY:** Calico Enterprise implements policy for domain names by learning the corresponding IPs from DNS, then programming rules to allow those IPs. This means that if multiple domain names A, B and C all map to the same IP, and there is domain-based policy to allow A, traffic to B and C will be allowed as well. + "type": "calico-ipam" -### Selector[​](#selector) + }, -A label selector is an expression which either matches or does not match a resource based on its labels. 
+ "feature_control": { -Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses. + "floating_ips": true -| Expression | Meaning | -| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| **Logical operators** | | -| `( )` | Matches if and only if `` matches. (Parentheses are used for grouping expressions.) | -| `! ` | Matches if and only if `` does not match. **Tip:** `!` is a special character at the start of a YAML string, if you need to use `!` at the start of a YAML string, enclose the string in quotes. | -| ` && ` | "And": matches if and only if both ``, and, `` matches | -| ` \|\| ` | "Or": matches if and only if either ``, or, `` matches. | -| **Match operators** | | -| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. | -| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. | -| `k == 'v'` | Matches resources with the label 'k' and value 'v'. | -| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` | -| `has(k)` | Matches resources with label 'k', independent of value. 
To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` |
-| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set |
-| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set |
-| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' |
-| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' |
-| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' |

+ }

-Operators have the following precedence:

+ }
+ ```

-- **Highest**: all the match operators
-- Parentheses `( ... )`
-- Negation with `!`
-- Conjunction with `&&`
-- **Lowest**: Disjunction with `||`

+ > **WARNING:** This feature can allow pods to receive traffic which may not have been intended for that pod. Users should make sure the proper admission control is in place to prevent users from selecting arbitrary floating IP addresses.

-For example, the expression

+### Using IP pools node selectors[​](#using-ip-pools-node-selectors)

-```text
-! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'}
-```

+Nodes will only assign workload addresses from IP pools which select them. By default, IP pools select all nodes, but this can be configured using the `nodeSelector` field. Check out the [IP pool resource document](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool) for more details.

-Would be "bracketed" like this:

+Example:

-```text
-(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'}))
-```

+1. Create (or update) an IP pool that only allocates IPs for nodes where it contains a label `rack=0`.

-It would match:

 + ```bash
 + kubectl create -f -<<EOF
 + apiVersion: projectcalico.org/v3
 + kind: IPPool
 + metadata:

- - Has a value for `my-label` that starts with "prod", and,
- - Has a role label with value either "frontend", or "business". 
+ name: rack-0-ippool -### Ports[​](#ports) + spec: -Calico Enterprise supports the following syntaxes for expressing ports. + cidr: 192.168.0.0/24 -| Syntax | Example | Description | -| --------- | ---------- | ------------------------------------------------------------------- | -| int | 80 | The exact (numeric) port specified | -| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end | -| string | named-port | A named port, as defined in the ports list of one or more endpoints | + ipipMode: Always -An individual numeric port may be specified as a YAML/JSON integer. A port range or named port must be represented as a string. For example, this would be a valid list of ports: + natOutgoing: true -```yaml -ports: [8080, '1234:5678', 'named-port'] -``` + nodeSelector: rack == "0" -#### Named ports[​](#named-ports) + EOF + ``` -Using a named port in an `EntityRule`, instead of a numeric port, gives a layer of indirection, allowing for the named port to map to different numeric values for each endpoint. +2. Label a node with `rack=0`. -For example, suppose you have multiple HTTP servers running as workloads; some exposing their HTTP port on port 80 and others on port 8080. In each workload, you could create a named port called `http-port` that maps to the correct local port. Then, in a rule, you could refer to the name `http-port` instead of writing a different rule for each type of server. + ```bash + kubectl label nodes kube-node-0 rack=0 + ``` -> **SECONDARY:** Since each named port may refer to many endpoints (and Calico Enterprise has to expand a named port into a set of endpoint/port combinations), using a named port is considerably more expensive in terms of CPU than using a simple numeric port. We recommend that they are used sparingly, only where the extra indirection is required. 
+Check out the usage guide on [assign IP addresses based on topology](https://docs.tigera.io/calico-enterprise/latest/networking/ipam/assign-ip-addresses-topology) -### ServiceAccountMatch[​](#serviceaccountmatch) +for a full example. -A ServiceAccountMatch matches service accounts in an EntityRule. +### CNI network configuration lists[​](#cni-network-configuration-lists) -| Field | Description | Schema | -| -------- | ------------------------------- | --------------------- | -| names | Match service accounts by name | list of strings | -| selector | Match service accounts by label | [selector](#selector) | +The CNI 0.3.0 [spec](https://github.com/containernetworking/cni/blob/spec-v0.3.0/SPEC.md#network-configuration-lists) supports "chaining" multiple CNI plugins together. Calico Enterprise supports the following Kubernetes CNI plugins, which are enabled by default. Although chaining other CNI plugins may work, we support only the following tested CNI plugins. -### ServiceMatch[​](#servicematch) +**Port mapping plugin** -A ServiceMatch matches a service in an EntityRule. +Calico Enterprise is required to implement Kubernetes host port functionality and is enabled by default. -| Field | Description | Schema | -| --------- | ------------------------ | ------ | -| name | The service's name. | string | -| namespace | The service's namespace. | string | +> **SECONDARY:** Be aware of the following [portmap plugin CNI issue](https://github.com/containernetworking/cni/issues/605) where draining nodes may take a long time with a cluster of 100+ nodes and 4000+ services. -### Performance Hints[​](#performance-hints) +To disable it, remove the portmap section from the CNI network configuration in the Calico Enterprise manifests. -Performance hints provide a way to tell Calico Enterprise about the intended use of the policy so that it may process it more efficiently. 
Currently only one hint is defined: +```json +{ -- `AssumeNeededOnEveryNode`: normally, Calico Enterprise only calculates a policy's rules and selectors on nodes where the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases. The `AssumeNeededOnEveryNode` hint tells Calico Enterprise to treat the policy as "in use" on *every* node. This is useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads" the policy on every node so that there is less work to do when the first endpoint matching the policy shows up. It also prevents work from being done to tear down the policy when the last endpoint is drained. + "type": "portmap", -## Supported operations[​](#supported-operations) + "snat": true, -| Datastore type | Create/Delete | Update | Get/List | Notes | -| ------------------------ | ------------- | ------ | -------- | ----- | -| Kubernetes API datastore | Yes | Yes | Yes | | + "capabilities": { "portMappings": true } -#### List filtering on tiers[​](#list-filtering-on-tiers) +} +``` -List and watch operations may specify label selectors or field selectors to filter `StagedGlobalNetworkPolicy` resources on tiers returned by the API server. When no selector is specified, the API server returns all `StagedGlobalNetworkPolicy` resources from all tiers that the user has access to. +### Order of precedence[​](#order-of-precedence) -##### Field selector[​](#field-selector) +If more than one of these methods are used for IP address assignment, they will take on the following precedence, 1 being the highest: -When using the field selector, supported operators are `=` and `==` +1. Kubernetes annotations +2. CNI configuration +3. IP pool node selectors -The following example shows how to retrieve all `StagedGlobalNetworkPolicy` resources in the default tier: +> **SECONDARY:** Calico Enterprise IPAM will not reassign IP addresses to workloads that are already running. 
To update running workloads with IP addresses from a newly configured IP pool, they must be recreated. We recommend doing this before going into production or during a maintenance window. -```bash -kubectl get stagedglobalnetworkpolicy --field-selector spec.tier=default -``` +### Specify num\_queues for veth interfaces[​](#specify-num_queues-for-veth-interfaces) -##### Label selector[​](#label-selector) +`num_rx_queues` and `num_tx_queues` can be set using the `num_queues` option in the CNI configuration. Default: 1 -When using the label selector, supported operators are `=`, `==` and `IN`. +For example: -The following example shows how to retrieve all `StagedGlobalNetworkPolicy` resources in the `default` and `net-sec` tiers: +```json +{ -```bash -kubectl get stagedglobalnetworkpolicy -l 'projectcalico.org/tier in (default, net-sec)' -``` + "num_queues": 3 -### Staged Kubernetes network policy +} +``` -A staged kubernetes network policy resource (`StagedKubernetesNetworkPolicy`) represents a staged version of [Kubernetes network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies). This is used to preview network behavior before actually enforcing the network policy. Once persisted, this will create a Kubernetes network policy backed by a Calico Enterprise [network policy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy). +### Configure resource requests and limits -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `stagedkubernetesnetworkpolicy.projectcalico.org`, `stagedkubernetesnetworkpolicies.projectcalico.org` and abbreviations such as `stagedkubernetesnetworkpolicy.p` and `stagedkubernetesnetworkpolicies.p`. 
+## Big picture[​](#big-picture) -## Sample YAML[​](#sample-yaml) +Resource requests and limits are essential configurations for managing resource allocation and ensuring optimal performance of Kubernetes workloads. In Calico Enterprise, these configurations can be customized using custom resources to meet specific requirements and optimize resource utilization. -Below is a sample policy created from the example policy from the [Kubernetes NetworkPolicy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource). The only difference between this policy and the example Kubernetes version is that the `apiVersion` and `kind` are changed to properly specify a staged Kubernetes network policy. +> **SECONDARY:** It's important to note that the CPU and memory values used in the examples are for demonstration purposes and should be adjusted based on individual system requirements. To find the list of all applicable containers for a component, please refer to its specification. -```yaml -apiVersion: projectcalico.org/v3 +## APIServer custom resource[​](#apiserver-custom-resource) -kind: StagedKubernetesNetworkPolicy +The [APIServer](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserver) CR provides a way to configure APIServerDeployment. The following sections provide example configurations for this CR. 
-metadata:

+### APIServerDeployment[​](#apiserverdeployment)

- name: test-network-policy

+To configure resource specification for the [APIServerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserverdeployment), patch the APIServer CR with the following command:

- namespace: default

+```bash
+kubectl patch apiserver tigera-secure --type=merge --patch='{"spec": {"apiServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-apiserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-queryserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```

-spec:

+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), with the CPU limit set to 1 CPU and the memory limit to 1000 MiB.

- podSelector:

+#### Verification[​](#verification)

- matchLabels:

+You can verify the configured resources using the following command:

- role: db

+```bash
+kubectl get deployment.apps/calico-apiserver -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```

- policyTypes:

+This command will output the configured resource requests and limits for the Calico APIServerDeployment component in JSON format.
- - Ingress +```bash +{ - - Egress + "name": "calico-apiserver", - ingress: + "resources": { - - from: + "limits": { - - ipBlock: + "cpu": "1", - cidr: 172.17.0.0/16 + "memory": "1000Mi" - except: + }, - - 172.17.1.0/24 + "requests": { - - namespaceSelector: + "cpu": "100m", - matchLabels: + "memory": "100Mi" - project: myproject + } - - podSelector: + } - matchLabels: +} - role: frontend +{ - ports: + "name": "tigera-queryserver", - - protocol: TCP + "resources": { - port: 6379 + "limits": { - egress: + "cpu": "1", - - to: + "memory": "1000Mi" - - ipBlock: + }, - cidr: 10.0.0.0/24 + "requests": { - ports: + "cpu": "100m", - - protocol: TCP + "memory": "100Mi" - port: 5978 -``` + } -## Definition[​](#definition) + } -See the [Kubernetes NetworkPolicy documentation](https://v1-21.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#networkpolicyspec-v1-networking-k8s-io) for more information. +} +``` -### Staged network policy +## ApplicationLayer custom resource[​](#applicationlayer-custom-resource) - +The [ApplicationLayer](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#applicationlayer) CR provides a way to configure resources for L7LogCollectorDaemonSet. The following sections provide example configurations for this CR. 
- +### L7LogCollectorDaemonSet[​](#l7logcollectordaemonset) - +To configure resource specification for the [L7LogCollectorDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#l7logcollectordaemonset), patch the ApplicationLayer CR using the below command: - +```bash +kubectl patch applicationlayer tigera-secure --type=merge --patch='{"spec": {"l7LogCollectorDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"l7-collector","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"envoy-proxy","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` - +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). - +#### Verification[​](#verification-1) - +You can verify the configured resources using the following command: -A staged network policy resource (`StagedNetworkPolicy`) represents an ordered set of rules which are applied to a collection of endpoints that match a [label selector](#selector). These rules are used to preview network behavior and do not enforce network traffic. For enforcing network traffic, see [network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy). +```bash +kubectl get daemonset.apps/l7-log-collector -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -`StagedNetworkPolicy` is a namespaced resource. `StagedNetworkPolicy` in a specific namespace only applies to [workload endpoint resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint) in that namespace. Two resources are in the same namespace if the `namespace` value is set the same on both. 
See [staged global network policy resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/stagedglobalnetworkpolicy) for staged non-namespaced network policy. +This command will output the configured resource requests and limits for the Calico L7LogCollectorDaemonSet component in JSON format. -`StagedNetworkPolicy` resources can be used to define network connectivity rules between groups of Calico Enterprise endpoints and host endpoints, and take precedence over [profile resources](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) if any are defined. +```bash +{ -StagedNetworkPolicies are organized into [tiers](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the next [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier), to enable hierarchical security policy. + "name": "envoy-proxy", -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `stagednetworkpolicy.projectcalico.org`, `stagednetworkpolicies.projectcalico.org` and abbreviations such as `stagednetworkpolicy.p` and `stagednetworkpolicies.p`. + "resources": { -## Sample YAML[​](#sample-yaml) + "limits": { -This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on `database` endpoints. 
+ "cpu": "1", -```yaml -apiVersion: projectcalico.org/v3 + "memory": "1000Mi" -kind: StagedNetworkPolicy + }, -metadata: + "requests": { - name: internal-access.allow-tcp-6379 + "cpu": "100m", - namespace: production + "memory": "100Mi" -spec: + } - tier: internal-access + } - selector: role == 'database' +} - types: +{ - - Ingress + "name": "l7-collector", - - Egress + "resources": { - ingress: + "limits": { - - action: Allow + "cpu": "1", - protocol: TCP + "memory": "1000Mi" - source: + }, - selector: role == 'frontend' + "requests": { - destination: + "cpu": "100m", - ports: + "memory": "100Mi" - - 6379 + } - egress: + } - - action: Allow +} ``` -## Definition[​](#definition) +## Authentication custom resource[​](#authentication-custom-resource) -### Metadata[​](#metadata) +The [Authentication](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#authentication) CR provides a way to configure resources for DexDeployment. The following sections provide example configurations for this CR. -| Field | Description | Accepted Values | Schema | Default | -| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- | -| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | | -| namespace | Namespace provides an additional qualification to a resource name. 
| | string | "default" | +### DexDeployment[​](#dexdeployment) -### Spec[​](#spec) +To configure resource specification for the [DexDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#dexdeployment), patch the Authentication CR using the below command: -| Field | Description | Accepted Values | Schema | Default | -| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- | -| order | Controls the order of precedence. Calico Enterprise applies the policy with the lowest value first. | | float | | -| tier | Name of the [tier](https://docs.tigera.io/calico-enterprise/latest/reference/resources/tier) this policy belongs to. | | string | `default` | -| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() | -| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* | -| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | | -| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | | -| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select a specific service account by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() | -| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. 
The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | | +```bash +kubectl patch authentication tigera-secure --type=merge --patch='{"spec": {"dexDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-dex","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` -\* If `types` has no value, Calico Enterprise defaults as follows. +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). -> | Ingress Rules Present | Egress Rules Present | `Types` value | -> | --------------------- | -------------------- | ----------------- | -> | No | No | `Ingress` | -> | Yes | No | `Ingress` | -> | No | Yes | `Egress` | -> | Yes | Yes | `Ingress, Egress` | +#### Verification[​](#verification-2) -### Rule[​](#rule) +You can verify the configured resources using the following command: -A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order. +```bash +kubectl get deployment.apps/tigera-dex -n tigera-dex -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -| Field | Description | Accepted Values | Schema | Default | -| ----------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | ----------------------------- | ------- | -| metadata | Per-rule metadata. | | [RuleMetadata](#rulemetadata) | | -| action | Action to perform when matching this rule. | `Allow`, `Deny`, `Log`, `Pass` | string | | -| protocol | Positive protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | -| notProtocol | Negative protocol match. 
| `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | | -| icmp | ICMP match criteria. | | [ICMP](#icmp) | | -| notICMP | Negative match on ICMP. | | [ICMP](#icmp) | | -| ipVersion | Positive IP version match. | `4`, `6` | integer | | -| source | Source match parameters. | | [EntityRule](#entityrule) | | -| destination | Destination match parameters. | | [EntityRule](#entityrule) | | -| http | Match HTTP request parameters. Application layer policy must be enabled to use this field. | | [HTTPMatch](#httpmatch) | | +This command will output the configured resource requests and limits for the Calico DexDeployment component in JSON format. -After a `Log` action, processing continues with the next rule; `Allow` and `Deny` are immediate and final and no further rules are processed. +```bash +{ -An `action` of `Pass` in a `NetworkPolicy` or `GlobalNetworkPolicy` will skip over the remaining policies and jump to the first [profile](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) assigned to the endpoint, applying the policy configured in the profile; if there are no Profiles configured for the endpoint the default applied action is `Deny`. + "name": "tigera-dex", -### RuleMetadata[​](#rulemetadata) + "resources": { -Metadata associated with a specific rule (rather than the policy as a whole). The contents of the metadata does not affect how a rule is interpreted or enforced; it is simply a way to store additional information for use by operators or applications that interact with Calico Enterprise. + "limits": { -| Field | Description | Schema | Default | -| ----------- | ----------------------------------- | ----------------------- | ------- | -| annotations | Arbitrary non-identifying metadata. 
| map of string to string | | + "cpu": "1", -Example: + "memory": "1000Mi" -```yaml -metadata: + }, - annotations: + "requests": { - app: database + "cpu": "100m", - owner: devops -``` + "memory": "100Mi" -Annotations follow the [same rules as Kubernetes for valid syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set). + } -On Linux with the iptables data plane, rule annotations are rendered as comments in the form `-m comment --comment "="` on the iptables rule(s) that correspond to the Calico Enterprise rule. + } -### ICMP[​](#icmp) +} +``` -| Field | Description | Accepted Values | Schema | Default | -| ----- | ------------------- | -------------------- | ------- | ------- | -| type | Match on ICMP type. | Can be integer 0-254 | integer | | -| code | Match on ICMP code. | Can be integer 0-255 | integer | | +## Compliance custom resource[​](#compliance-custom-resource) -### EntityRule[​](#entityrule) +The [Compliance](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliance) CR provides a way to configure resources for ComplianceControllerDeployment, ComplianceSnapshotterDeployment, ComplianceBenchmarkerDaemonSet, ComplianceServerDeployment, ComplianceReporterPodTemplate. The following sections provide example configurations for this CR. -Entity rules specify the attributes of the source or destination of a packet that must match for the rule as a whole to match. Packets can be matched on combinations of: +Example Configurations: -- Identity of the source/destination, by using [Selectors](#selectors) or by specifying a particular Kubernetes `Service`. 
Selectors can match [workload endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/workloadendpoint), [host endpoints](https://docs.tigera.io/calico-enterprise/latest/reference/resources/hostendpoint) and ([namespaced](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkset) or [global](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkset)) network sets. -- Source/destination IP address, protocol and port. +### ComplianceControllerDeployment[​](#compliancecontrollerdeployment) -If the rule contains multiple match criteria (for example, an IP and a port) then all match criteria must match for the rule as a whole to match a packet. +To configure resource specification for the [ComplianceControllerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliancecontrollerdeployment), patch the Compliance CR using the below command: -| Field | Description | Accepted Values | Schema | Default | -| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------- | -| nets | Match packets with IP in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | -| notNets | Negative match on CIDRs. 
Match packets with IP not in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs | | -| selector | Positive match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | -| notSelector | Negative match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | | -| namespaceSelector | Positive match on selected namespaces. If specified, only workload endpoints in the selected Kubernetes namespaces are matched. Matches namespaces based on the labels that have been applied to the namespaces. Defines the scope that selectors will apply to, if not defined then selectors apply to the NetworkPolicy's namespace. Match a specific namespace by name using the `projectcalico.org/name` label. Select the non-namespaced resources like GlobalNetworkSet(s), host endpoints to which this policy applies by using `global()` selector. | Valid selector | [selector](#selector) | | -| ports | Positive match on the specified ports | | list of [ports](#ports) | | -| domains | Positive match on [domain names](#exact-and-wildcard-domain-names). | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list of strings | | -| notPorts | Negative match on the specified ports | | list of [ports](#ports) | | -| serviceAccounts | Match endpoints running under service accounts. If a `namespaceSelector` is also defined, the set of service accounts this applies to is limited to the service accounts in the selected namespaces. | | [ServiceAccountMatch](#serviceaccountmatch) | | -| services | Match the specified service(s). If specified on egress rule destinations, no other selection criteria can be set. 
If specified on ingress rule sources, only positive or negative matches on ports can be specified. | | [ServiceMatch](#servicematch) | | +```bash +kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` -> **SECONDARY:** You cannot mix IPv4 and IPv6 CIDRs in a single rule using `nets` or `notNets`. If you need to match both, create 2 rules. +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). -#### Selector performance in EntityRules[​](#selector-performance-in-entityrules) +#### Verification[​](#verification-3) -When rendering policy into the data plane, Calico Enterprise must identify the endpoints that match the selectors in all active rules. This calculation is optimized for certain common selector types. Using the optimized selector types reduces CPU usage (and policy rendering time) by orders of magnitude. This becomes important at high scale (hundreds of active rules, hundreds of thousands of endpoints). +You can verify the configured resources using the following command: -The optimized operators are as follows: +```bash +kubectl get deployment.apps/compliance-controller -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -- `label == "value"` -- `label in { 'v1', 'v2' }` -- `has(label)` -- ` && ` is optimized if **either** `` or `` is optimized. +This command will output the configured resource requests and limits for the ComplianceControllerDeployment component in JSON format. -The following perform like `has(label)`. 
All endpoints with the label will be scanned to find matches:

+```bash
+{

-- `label contains 's'`
-- `label starts with 's'`
-- `label ends with 's'`

+ "name": "compliance-controller",

-The other operators, and in particular, `all()`, `!`, `||` and `!=` are not optimized.

+ "resources": {

-Examples:

+ "limits": {

-- `a == 'b'` - optimized
-- `a == 'b' && has(c)` - optimized
-- `a == 'b' || has(c)` - **not** optimized due to use of `||`
-- `c != 'd'` - **not** optimized due to use of `!=`
-- `!has(a)` - **not** optimized due to use of `!`
-- `a == 'b' && c != 'd'` - optimized: `a == 'b'` is optimized, so `a == 'b' && <anything>` is optimized.
-- `c != 'd' && a == 'b'` - optimized: `a == 'b'` is optimized, so `<anything> && a == 'b'` is optimized.

+ "cpu": "1",

-### Exact and wildcard domain names[​](#exact-and-wildcard-domain-names)

+ "memory": "1000Mi"

-The `domains` field is only valid for egress Allow rules. It restricts the rule to apply only to traffic to one of the specified domains. If this field is specified, the parent [Rule](#rule)'s `action` must be `Allow`, and `nets` and `selector` must both be left empty.

+ },

-When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example:

+ "requests": {

-- `microsoft.com`
-- `tigera.io`

+ "cpu": "100m",

-With a single asterisk in any part of the domain name, it matches 1 or more path components at that position.
For example: + "memory": "100Mi" -- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com` -- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com` -- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on + } -**Not** supported are: + } -- Multiple wildcards in the same domain, for example: `*.*.mycompany.com` -- Asterisks that are not the entire component, for example: `www.g*.com` -- A wildcard as the last component, for example: `www.mycompany.*` -- More general wildcards, such as regular expressions +} +``` -> **SECONDARY:** Calico Enterprise implements policy for domain names by learning the corresponding IPs from DNS, then programming rules to allow those IPs. This means that if multiple domain names A, B and C all map to the same IP, and there is domain-based policy to allow A, traffic to B and C will be allowed as well. +### ComplianceSnapshotterDeployment[​](#compliancesnapshotterdeployment) -### Selector[​](#selector) +To configure resource specification for the [ComplianceSnapshotterDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliancesnapshotterdeployment), patch the Compliance CR using the below command: -A label selector is an expression which either matches or does not match a resource based on its labels. +```bash +kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceSnapshotterDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-snapshotter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` -Calico Enterprise label selectors support a number of operators, which can be combined into larger expressions using the boolean operators and parentheses. 
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). -| Expression | Meaning | -| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| **Logical operators** | | -| `( )` | Matches if and only if `` matches. (Parentheses are used for grouping expressions.) | -| `! ` | Matches if and only if `` does not match. **Tip:** `!` is a special character at the start of a YAML string, if you need to use `!` at the start of a YAML string, enclose the string in quotes. | -| ` && ` | "And": matches if and only if both ``, and, `` matches | -| ` \|\| ` | "Or": matches if and only if either ``, or, `` matches. | -| **Match operators** | | -| `all()` | Match all in-scope resources. To match *no* resources, combine this operator with `!` to form `!all()`. | -| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. | -| `k == 'v'` | Matches resources with the label 'k' and value 'v'. | -| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value *not* equal to `v` | -| `has(k)` | Matches resources with label 'k', independent of value. 
To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` |
-| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set |
-| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value *not* in the given set |
-| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' |
-| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' |
-| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' |

+#### Verification[​](#verification-4)

-Operators have the following precedence:

+You can verify the configured resources using the following command:

-- **Highest**: all the match operators
-- Parentheses `( ... )`
-- Negation with `!`
-- Conjunction with `&&`
-- **Lowest**: Disjunction with `||`

+```bash
+kubectl get deployment.apps/compliance-snapshotter -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```

-For example, the expression

+This command will output the configured resource requests and limits for the ComplianceSnapshotterDeployment in JSON format.

-```text
-! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'}
-```

+```bash
+{

-Would be "bracketed" like this:

+ "name": "compliance-snapshotter",

-```text
-(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'}))
-```

+ "resources": {

-It would match:

+ "limits": {

-- Any resource that does not have the label "my-label".

+ "cpu": "1",

-- Any resource that both:

+ "memory": "1000Mi"

-
+ },
- - Has a value for `my-label` that starts with "prod", and
- - Has a role label with value either "frontend" or "business".

+ "requests": {

-### Ports[​](#ports)

+ "cpu": "100m",

-Calico Enterprise supports the following syntaxes for expressing ports.
+ "memory": "100Mi" -| Syntax | Example | Description | -| --------- | ---------- | ------------------------------------------------------------------- | -| int | 80 | The exact (numeric) port specified | -| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end | -| string | named-port | A named port, as defined in the ports list of one or more endpoints | + } -An individual numeric port may be specified as a YAML/JSON integer. A port range or named port must be represented as a string. For example, this would be a valid list of ports: + } -```yaml -ports: [8080, '1234:5678', 'named-port'] +} ``` -#### Named ports[​](#named-ports) - -Using a named port in an `EntityRule`, instead of a numeric port, gives a layer of indirection, allowing for the named port to map to different numeric values for each endpoint. +### ComplianceBenchmarkerDaemonSet[​](#compliancebenchmarkerdaemonset) -For example, suppose you have multiple HTTP servers running as workloads; some exposing their HTTP port on port 80 and others on port 8080. In each workload, you could create a named port called `http-port` that maps to the correct local port. Then, in a rule, you could refer to the name `http-port` instead of writing a different rule for each type of server. +To configure resource specification for the [ComplianceBenchmarkerDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliancebenchmarkerdaemonset), patch the Compliance CR using the below command: -> **SECONDARY:** Since each named port may refer to many endpoints (and Calico Enterprise has to expand a named port into a set of endpoint/port combinations), using a named port is considerably more expensive in terms of CPU than using a simple numeric port. We recommend that they are used sparingly, only where the extra indirection is required. 
+```bash +kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceBenchmarkerDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-benchmarker","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` -### ServiceAccountMatch[​](#serviceaccountmatch) +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). -A ServiceAccountMatch matches service accounts in an EntityRule. +#### Verification[​](#verification-5) -| Field | Description | Schema | -| -------- | ------------------------------- | --------------------- | -| names | Match service accounts by name | list of strings | -| selector | Match service accounts by label | [selector](#selector) | +You can verify the configured resources using the following command: -### ServiceMatch[​](#servicematch) +```bash +kubectl get daemonset.apps/compliance-benchmarker -n tigera-compliance -o json |jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -A ServiceMatch matches a service in an EntityRule. +```bash +{ -| Field | Description | Schema | -| --------- | ------------------------ | ------ | -| name | The service's name. | string | -| namespace | The service's namespace. | string | + "name": "compliance-benchmarker", -### Performance Hints[​](#performance-hints) + "resources": { -Performance hints provide a way to tell Calico Enterprise about the intended use of the policy so that it may process it more efficiently. Currently only one hint is defined: + "limits": { -- `AssumeNeededOnEveryNode`: normally, Calico Enterprise only calculates a policy's rules and selectors on nodes where the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases. 
The `AssumeNeededOnEveryNode` hint tells Calico Enterprise to treat the policy as "in use" on *every* node. This is useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads" the policy on every node so that there is less work to do when the first endpoint matching the policy shows up. It also prevents work from being done to tear down the policy when the last endpoint is drained. + "cpu": "1", -## Supported operations[​](#supported-operations) + "memory": "1000Mi" -| Datastore type | Create/Delete | Update | Get/List | Notes | -| ------------------------ | ------------- | ------ | -------- | ----- | -| Kubernetes API datastore | Yes | Yes | Yes | | + }, -#### List filtering on tiers[​](#list-filtering-on-tiers) + "requests": { -List and watch operations may specify label selectors or field selectors to filter `StagedNetworkPolicy` resources on tiers returned by the API server. When no selector is specified, the API server returns all `StagedNetworkPolicy` resources from all tiers that the user has access to. + "cpu": "100m", -##### Field selector[​](#field-selector) + "memory": "100Mi" -When using the field selector, supported operators are `=` and `==` + } -The following example shows how to retrieve all `StagedNetworkPolicy` resources in the default tier and in all namespaces: + } -```bash -kubectl get stagednetworkpolicy.p --field-selector spec.tier=default --all-namespaces +} ``` -##### Label selector[​](#label-selector) +This command will output the configured resource requests and limits for the ComplianceBenchmarkerDaemonSet in JSON format. -When using the label selector, supported operators are `=`, `==` and `IN`. 
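The inline `--patch` strings used throughout these sections are easy to mistype. As a hedged alternative (assuming a recent kubectl that supports `--patch-file`, and `python3` available on the workstation), the same benchmarker patch can be kept in a file and validated locally before it is applied:

```bash
# Write the same merge patch shown above to a file
# (content mirrors the inline --patch string).
cat > benchmarker-patch.json <<'EOF'
{
  "spec": {
    "complianceBenchmarkerDaemonSet": {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "compliance-benchmarker",
                "resources": {
                  "limits": { "cpu": "1", "memory": "1000Mi" },
                  "requests": { "cpu": "100m", "memory": "100Mi" }
                }
              }
            ]
          }
        }
      }
    }
  }
}
EOF

# Catch JSON quoting mistakes locally before touching the cluster.
python3 -m json.tool benchmarker-patch.json > /dev/null && echo "patch is valid JSON"

# Then apply it (requires a cluster with the Compliance CR installed):
# kubectl patch compliance tigera-secure --type=merge --patch-file benchmarker-patch.json
```

Keeping the patch in a file also makes the override easy to review and version-control alongside other cluster configuration.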
+### ComplianceServerDeployment[​](#complianceserverdeployment)
-The following example shows how to retrieve all `StagedNetworkPolicy` resources in the `default` and `net-sec` tiers and in all namespaces:
+To configure resource specification for the [ComplianceServerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#complianceserverdeployment), patch the Compliance CR with the following command:
```bash
-kubectl get stagednetworkpolicy.p -l 'projectcalico.org/tier in (default, net-sec)' --all-namespaces
+kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
```
-### Tier
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), and sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
-A tier resource (`Tier`) represents an ordered collection of [NetworkPolicies](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) and/or [GlobalNetworkPolicies](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy). Tiers are used to divide these policies into groups of different priorities. These policies are ordered within a Tier: the additional hierarchy of Tiers provides more flexibility because the `Pass` `action` in a Rule jumps to the next Tier. Some example use cases for this are.
+#### Verification[​](#verification-6)
-- Allowing privileged users to define security policy that takes precedence over other users.
-- Translating hierarchies of physical firewalls directly into Calico Enterprise policy.
+You can verify the configured resources using the following command: -For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI: `tier.projectcalico.org`, `tiers.projectcalico.org` and abbreviations such as `tier.p` and `tiers.p`. +```bash +kubectl get deployment.apps/compliance-server -n tigera-compliance -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -## How policy is evaluated[​](#how-policy-is-evaluated) +This command will output the configured resource requests and limits for the ComplianceServerDeployment in JSON format. -When a new connection is processed by Calico Enterprise, each tier that contains a policy that applies to the endpoint processes the packet. Tiers are sorted by their `order` - smallest number first. +```bash +{ -Policies in each Tier are then processed in order. + "name": "compliance-server", -- If a [NetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) or [GlobalNetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) in the Tier `Allow`s or `Deny`s the packet, then evaluation is done: the packet is handled accordingly. -- If a [NetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) or [GlobalNetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/globalnetworkpolicy) in the Tier `Pass`es the packet, the next Tier containing a Policy that applies to the endpoint processes the packet. + "resources": { -If the Tier applies to the endpoint, but takes no action on the packet the packet is dropped. This behaviour can be changed by setting the `defaultAction` of a tier to `Pass`. 
+ "limits": {
-If the last Tier applying to the endpoint `Pass`es the packet, that endpoint's [Profiles](https://docs.tigera.io/calico-enterprise/latest/reference/resources/profile) are evaluated.
+ "cpu": "1",
-## Sample YAML[​](#sample-yaml)
+ "memory": "1000Mi"
-```yaml
-apiVersion: projectcalico.org/v3
+ },
-kind: Tier
+ "requests": {
-metadata:
+ "cpu": "100m",
- name: internal-access
+ "memory": "100Mi"
-spec:
+ }
- order: 100
+ }
- defaultAction: Deny
+}
```
-## Definition[​](#definition)
+### ComplianceReporterPodTemplate[​](#compliancereporterpodtemplate)
-### Metadata[​](#metadata)
+To configure resource specification for the [ComplianceReporterPodTemplate](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliancereporterpodtemplate), patch the Compliance CR with the following command:
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------- | --------------- | ------ |
-| name | The name of the tier. | | string |
+```bash
+kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceReporterPodTemplate": {"template": {"spec": {"containers":[{"name":"reporter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}'
+```
-### Spec[​](#spec)
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), and sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
-| Field | Description | Accepted Values | Schema | Default |
-| ------------- | ------------------------------------------------------------------------------------------------------------------------------------ | --------------- | ------ | --------------------- |
-| order | (Optional) Indicates priority of this Tier, with lower order taking precedence. 
No value indicates highest order (lowest precedence) | | float | `nil` (highest order) | -| defaultAction | (Optional) Indicates the default action, when this Tier applies to an endpoint, but takes not action on the packet | `Deny`, `Pass` | string | `Deny` | +#### Verification[​](#verification-7) -All Policies created by Calico Enterprise orchestrator integrations are created in the default (last) Tier. +You can verify the configured resources using the following command: -### Workload endpoint +```bash +kubectl get Podtemplates tigera.io.report -n tigera-compliance -o json | jq '.template.spec.containers[] | {name: .name, resources: .resources}' +``` - +This command will output the configured resource requests and limits for the ComplianceReporterPodTemplate component in JSON format. -A workload endpoint resource (`WorkloadEndpoint`) represents an interface connecting a Calico Enterprise networked container or VM to its host. +```bash +{ -Each endpoint may specify a set of labels and list of profiles that Calico Enterprise will use to apply policy to the interface. + "name": "reporter", -A workload endpoint is a namespaced resource, that means a [NetworkPolicy](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy) in a specific namespace only applies to the WorkloadEndpoint in that namespace. Two resources are in the same namespace if the namespace value is set the same on both. + "resources": { -This resource is not supported in `kubectl`. + "limits": { -> **SECONDARY:** While `calicoctl` allows the user to fully manage Workload Endpoint resources, the lifecycle of these resources is generally handled by an orchestrator-specific plugin such as the Calico Enterprise CNI plugin. In general, we recommend that you only use `calicoctl` to view this resource type. 
+ "cpu": "1",
-**Multiple networks**
+ "memory": "1000Mi"
-If multiple networks are enabled, workload endpoints will have additional labels which can be used in network policy selectors:
+ },
-- `projectcalico.org/network`: The name of the network specified in the NetworkAttachmentDefinition.
-- `projectcalico.org/network-namespace`: This namespace the network is in.
-- `projectcalico.org/network-interface`: The network interface for the workload endpoint.
+ "requests": {
-For more information, see the [multiple-networks how-to guide](https://docs.tigera.io/calico-enterprise/latest/networking/configuring/multiple-networks).
+ "cpu": "100m",
-## Sample YAML[​](#sample-yaml)
+ "memory": "100Mi"
-```yaml
-apiVersion: projectcalico.org/v3
+ }
-kind: WorkloadEndpoint
+ }
-metadata:
+}
+```
- name: node1-k8s-my--nginx--b1337a-eth0
+## Installation custom resource[​](#installation-custom-resource)
- namespace: default
+The [Installation CR](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api) provides a way to configure resources for various Calico Enterprise components, including the TyphaDeployment, CalicoNodeDaemonSet, CalicoNodeWindowsDaemonSet, CSINodeDriverDaemonSet, and CalicoKubeControllersDeployment. The following sections provide example configurations for this CR.
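For orientation, here is a sketch of how one of these overrides sits inside the Installation CR once a patch has been applied. This is an illustrative fragment only, assuming the operator-managed `default` Installation and the `calico-typha` container name used in the sections below; the resource values are the example values from this page, not operator defaults:

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  typhaDeployment:
    spec:
      template:
        spec:
          containers:
            - name: calico-typha
              resources:
                requests:
                  cpu: 100m
                  memory: 100Mi
                limits:
                  cpu: "1"
                  memory: 1000Mi
```

The merge patches in the following sections each write one such `spec.<component>` subtree without disturbing the rest of the CR.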
- labels:
+### TyphaDeployment[​](#typhadeployment)
- app: frontend
+To configure resource specification for the [TyphaDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#typhadeployment), patch the installation CR with the following command:
- projectcalico.org/namespace: default
+```bash
+kubectl patch installations default --type=merge --patch='{"spec": {"typhaDeployment": {"spec": {"template": {"spec": {"containers": [{"name": "calico-typha", "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}, "limits": {"cpu": "1", "memory": "1000Mi"}}}]}}}}}}'
+```
- projectcalico.org/orchestrator: k8s
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), and sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
-spec:
+#### Verification[​](#verification-8)
- node: node1
+You can verify the configured resources using the following command:
- orchestrator: k8s
+```bash
+kubectl get deployment.apps/calico-typha -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
+```
- endpoint: eth0
+This command will output the configured resource requests and limits for the Calico TyphaDeployment component in JSON format.
- containerID: 1337495556942031415926535 +```bash +{ - pod: my-nginx-b1337a + "name": "calico-typha", - endpoint: eth0 + "resources": { - interfaceName: cali0ef24ba + "limits": { - mac: ca:fe:1d:52:bb:e9 + "cpu": "1", - ipNetworks: + "memory": "1000Mi" - - 192.168.0.0/32 + }, - profiles: + "requests": { - - profile1 + "cpu": "100m", - ports: + "memory": "100Mi" - - name: some-port + } - port: 1234 + } - protocol: TCP +} +``` - - name: another-port +### CalicoNodeDaemonSet[​](#caliconodedaemonset) - port: 5432 +To configure resource requests for the [calicoNodeDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#caliconodedaemonset) component, patch the installation CR using the below command: - protocol: UDP +```bash +kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' ``` -## Definitions[​](#definitions) +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). -### Metadata[​](#metadata) +#### Verification[​](#verification-9) -| Field | Description | Accepted Values | Schema | Default | -| --------- | ------------------------------------------------------------------ | -------------------------------------------------- | ------ | --------- | -| name | The name of this workload endpoint resource. Required. | Alphanumeric string with optional `.`, `_`, or `-` | string | | -| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" | -| labels | A set of labels to apply to this endpoint. 
| | map | | +You can verify the configured resources using the following command: -### Spec[​](#spec) +```bash +kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +``` -| Field | Description | Accepted Values | Schema | Default | -| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | ---------------------------------------------- | ------- | -| workload | The name of the workload to which this endpoint belongs. | | string | | -| orchestrator | The orchestrator that created this endpoint. | | string | | -| node | The node where this endpoint resides. | | string | | -| containerID | The CNI CONTAINER\_ID of the workload endpoint. | | string | | -| pod | Kubernetes pod name for this workload endpoint. | | string | | -| endpoint | Container network interface name. | | string | | -| ipNetworks | The CIDRs assigned to the interface. | | List of strings | | -| ipNATs | List of 1:1 NAT mappings to apply to the endpoint. | | List of [IPNATs](#ipnat) | | -| awsElasticIPs | List of AWS Elastic IP addresses that should be considered for this workload; only used for workloads in an AWS-backed IP pool. This should be set via the `cni.projectcalico.org/awsElasticIPs` Pod annotation. | | List of valid IP addresses | | -| ipv4Gateway | The gateway IPv4 address for traffic from the workload. | | string | | -| ipv6Gateway | The gateway IPv6 address for traffic from the workload. | | string | | -| profiles | List of profiles assigned to this endpoint. | | List of strings | | -| interfaceName | The name of the host-side interface attached to the workload. | | string | | -| mac | The source MAC address of traffic generated by the workload. 
| | IEEE 802 MAC-48, EUI-48, or EUI-64 | | -| ports | List on named ports that this workload exposes. | | List of [WorkloadEndpointPorts](#endpointport) | | +This command will output the configured resource requests and limits for the Calico calicoNodeDaemonSet component in JSON format. -### IPNAT[​](#ipnat) +```bash +{ -IPNAT contains a single NAT mapping for a WorkloadEndpoint resource. + "name": "calico-node", -| Field | Description | Accepted Values | Schema | Default | -| ---------- | ------------------------------------------- | ------------------ | ------ | ------- | -| internalIP | The internal IP address of the NAT mapping. | A valid IP address | string | | -| externalIP | The external IP address. | A valid IP address | string | | + "resources": { -### EndpointPort[​](#endpointport) + "limits": { -A WorkloadEndpointPort associates a name with a particular TCP/UDP/SCTP port of the endpoint, allowing it to be referenced as a named port in [policy rules](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#entityrule). + "cpu": "1", -| Field | Description | Accepted Values | Schema | Default | -| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------- | ------ | ------- | -| name | The name to attach to this port, allowing it to be referred to in [policy rules](https://docs.tigera.io/calico-enterprise/latest/reference/resources/networkpolicy#entityrule). Names must be unique within an endpoint. | | string | | -| protocol | The protocol of this named port. | `TCP`, `UDP`, `SCTP` | string | | -| port | The workload port number. | `1`-`65535` | int | | -| hostPort | Port on the host that is forwarded to this port. | `1`-`65535` | int | | -| hostIP | IP address on the host on which the hostPort is accessible. 
| `1`-`65535` | int | | + "memory": "1000Mi" -> **SECONDARY:** On their own, WorkloadEndpointPort entries don't result in any change to the connectivity of the port. They only have an effect if they are referred to in policy. + }, -> **SECONDARY:** The hostPort and hostIP fields are read-only and determined from Kubernetes hostPort configuration. These fields are used only when host ports are enabled in Calico. + "requests": { -## Supported operations[​](#supported-operations) + "cpu": "100m", -| Datastore type | Create/Delete | Update | Get/List | Notes | -| --------------------- | ------------- | ------ | -------- | -------------------------------------------------------- | -| Kubernetes API server | No | Yes | Yes | WorkloadEndpoints are directly tied to a Kubernetes pod. | + "memory": "100Mi" -### Architecture + } - + } -## [📄️Component architecture](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/overview) +} +``` -[Understand the Calico Enterprise components and the basics of BGP networking.](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/overview) +### calicoNodeWindowsDaemonSet[​](#caliconodewindowsdaemonset) -## [📄️'The Calico Enterprise data path: IP routing and iptables'](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/data-path) +To configure resource requests for the [calicoNodeWindowsDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#caliconodewindowsdaemonset) component, patch the installation CR using the below command: -[Learn how packets flow between workloads in a datacenter, or between a workload and the internet.](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/data-path) +```bash +kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeWindowsDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node-windows","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, 
"limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
+```
-## [🗃Network design](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/)
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), and sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
-[2 items](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/)
+#### Verification[​](#verification-10)
-### Component architecture
+You can verify the configured resources using the following command:
- 
+```bash
+kubectl get daemonset.apps/calico-node-windows -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
+```
-## About Calico Enterprise architecture[​](#about-calico-enterprise-architecture)
+This command will output the configured resource requests and limits for the CalicoNodeWindowsDaemonSet component in JSON format.
-The following diagram shows the components that comprise a Kubernetes on-premises deployment using the Calico Enterprise CNI for networking and network policy.
+```bash
+{
-**Tip**: For best visibility, right-click on the image below and select "Open image in new tab"
+ "name": "calico-node-windows",
-![Architecture](https://docs.tigera.io/img/calico-enterprise/architecture-ee-new.svg)
+ "resources": {
-Calico open-source components are the foundation of Calico Enterprise. Calico Enterprise provides value-added components for visibility and troubleshooting, compliance, policy lifecycle management, threat detection, and multi-cluster management. 
+ "limits": { -## Calico Enterprise components[​](#calico-enterprise-components) + "cpu": "1", -- [calicoq](#calicoq) -- [Compliance](#compliance) -- [Linseed API and ES gateway](#linseed-api-and-es-gateway) -- [Intrusion detection](#intrusion-detection) -- [kube-controllers](#kube-controllers) -- [Manager](#manager) -- [Packet capture API](#packet-capture-api) -- [Prometheus API service](#prometheus-api-service) + "memory": "1000Mi" -## Bundled third-party components[​](#bundled-third-party-components) + }, -- [fluentd](#fluentd) -- [Elasticsearch and Kibana](#elasticsearch-and-kibana) -- [Prometheus](#prometheus) + "requests": { -## Calico open-source components[​](#calico-open-source-components) + "cpu": "100m", -- [API server](#api-server) -- [Felix](#felix) -- [BIRD](#bird) -- [calicoctl](#calicoctl) -- [calico-node](#calico-node) -- [confd](#confd) -- [CNI plugin](#cni-plugin) -- [Datastore plugin](#datastore-plugin) -- [IPAM plugin](#ipam-plugin) -- [Typha](#typha) + "memory": "100Mi" -## Kubernetes components[​](#kubernetes-components) + } -- [Kubernetes API server](#kubernetes-api-server) -- [kubectl](#kubectl) + } -## Cloud orchestrator plugins (not pictured)[​](#cloud-orchestrator-plugins-not-pictured) +} +``` -Translates the orchestrator APIs for managing networks to the Calico Enterprise data-model and datastore. +### CalicoKubeControllersDeployment[​](#calicokubecontrollersdeployment) -For cloud providers, Calico Enterprise has a separate plugin for each major cloud orchestration platform. This allows Calico Enterprise to tightly bind to the orchestrator, so users can manage the Calico Enterprise network using their orchestrator tools. When required, the orchestrator plugin provides feedback from the Calico Enterprise network to the orchestrator. For example, providing information about Felix liveness, and marking specific endpoints as failed if network setup fails. 
+To configure resource requests for the [CalicoKubeControllersDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#calicokubecontrollersdeployment) component, patch the installation CR with the following command:
-## Calico Enterprise components[​](#calico-enterprise-components-1)
+```bash
+kubectl patch installations default --type=merge --patch='{"spec": {"calicoKubeControllersDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-kube-controllers","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
+```
-### calicoq[​](#calicoq)
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), and sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
-**Main task**: A command line tool for policy inspection to ensure policies are configured as intended. For example, you can determine which endpoints a selector or policy matches, or which policies apply to an endpoint. Requires a separate installation. [calicoq](https://docs.tigera.io/calico-enterprise/latest/reference/clis/calicoq/).
+#### Verification[​](#verification-11)
-### Compliance[​](#compliance)
+You can verify the configured resources using the following command:
-**Main task**: Generates compliance reports for the Kubernetes cluster. Report are based on archived flow and audit logs for Calico Enterprise resources, plus any audit logs you’ve configured for Kubernetes resources in the Kubernetes API server. 
Compliance reports provide the following high-level information: +```bash +kubectl get deployment.apps/calico-kube-controllers -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' +``` -- Protection - - - Endpoints explicitly protected using ingress or egress policy +This command will output the configured resource requests and limits for the Calico CalicoKubeControllersDeployment component in JSON format. -- Policies and services +```bash +{ - + "name": "calico-kube-controllers", - - Policies and services associated with endpoints - - Policy audit logs + "resources": { -- Traffic - - - Allowed ingress/egress traffic to/from namespaces, and to/from the internet + "limits": { -Compliance is comprised of these components: + "cpu": "1", -**compliance-snapshotter** + "memory": "1000Mi" -Handles listing of required Kubernetes and Calico Enterprise configuration and pushes snapshots to Elasticsearch. Snapshots give you visibility into configuration changes, and how the cluster-wide configuration has evolved within a reporting interval. + }, -**compliance-reporter** + "requests": { -Handles report generation. Reads configuration history from Elasticsearch and determines time evolution of cluster-wide configuration, including relationships between policies, endpoints, services and networksets. Data is then passed through a zero-trust aggregator to determine the "worst-case outliers" in the reporting interval. + "cpu": "100m", -**compliance-controller** + "memory": "100Mi" -Reads report configuration, and manages creation, deletion, and monitoring of report generation jobs. + } -**compliance-server** + } -Provides the API for listing, downloading, and rendering reports, and RBAC by performing authentication and authorization through the Kubernetes API server. RBAC is determined from the users RBAC for the GlobalReportType and GlobalReport resources. 
+}
-**compliance-benchmarker**
+```
-A daemonset that runs checks in the CIS Kubernetes Benchmark on each node so you can see if Kubernetes is securely deployed.
+### CSINodeDriverDaemonSet[​](#csinodedriverdaemonset)
-### Linseed API and ES gateway[​](#linseed-api-and-es-gateway)
+To configure resource requests for the [CSINodeDriverDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#csinodedriverdaemonset) component, patch the installation CR with the following command:
-The Linseed API uses mTLS to connect to clients, and provides an API to access Elasticsearch data. The ES gateway proxies requests to Elasticsearch, and provides backwards-compatibility for managed clusters that run versions before 3.17.
+```bash
+kubectl patch installations default --type=merge --patch='{"spec": {"csiNodeDriverDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-csi","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}},{"name":"csi-node-driver-registrar","resources":{"requests":{"cpu":"50m", "memory":"50Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
+```
-### Intrusion detection[​](#intrusion-detection)
+This command sets the `calico-csi` container's CPU request to 100 milliCPU (mCPU) and memory request to 100 mebibytes (MiB), and the `csi-node-driver-registrar` container's CPU request to 50 mCPU and memory request to 50 MiB; both containers get a CPU limit of 1 CPU and a memory limit of 1000 MiB.
-**Main task**: Consists of a controller that handles integrations with threat intelligence feeds and Calico Enterprise custom alerts, and an installer that installs the Kibana dashboards for viewing jobs through the Kibana UI.
+#### Verification[​](#verification-12)
-### kube-controllers[​](#kube-controllers)
+You can verify the configured resources using the following command:
**Main task**: Monitors the Kubernetes API and performs actions based on cluster state. 
The Calico Enterprise kube-controllers container includes these controllers:
+```bash
+kubectl get daemonset.apps/csi-node-driver -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
+```
-- Node
-- Service
-- Federated services
-- Authorization
-- Managed cluster (for management clusters only)
+This command will output the configured resource requests and limits for the CSINodeDriverDaemonSet component in JSON format.
-### Manager[​](#manager)
+```bash
+{
-**Main task**: Provides network traffic visibility, centralized multi-cluster management, threat-defense troubleshooting, policy lifecycle management, and compliance using a browser-based UI for multiple roles/stakeholders. [Manager](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#manager).
+ "name": "calico-csi",
-### Packet capture API[​](#packet-capture-api)
+ "resources": {
-**Main task**: Retrieves capture files (pcap format) generated by a packet capture for use with network protocol analysis tools like Wireshark. The packet capture feature is installed by default in all cluster types. Packet capture data is visible in the web console, service graph.
+ "limits": {
-### Prometheus API service[​](#prometheus-api-service)
+ "cpu": "1",
-**Main task**: A proxy querying service that checks a user’s token RBAC to validate its scope and forwards the query to the Prometheus monitoring component.
+ "memory": "1000Mi"
-## Bundled third-party components[​](#bundled-third-party-components-1)
+ },
-### Elasticsearch and Kibana[​](#elasticsearch-and-kibana)
+ "requests": {
-**Main task**: Built-in third-party search-engine and visualization dashboard, which provide logs for visibility into workloads, to troubleshoot Kubernetes clusters. Installed and configured by default. [Elasticsearch](https://docs.tigera.io/calico-enterprise/latest/observability/). 
+ "cpu": "100m", -### fluentd[​](#fluentd) + "memory": "100Mi" -**Main task**: Collects and forwards Calico Enterprise logs (flows, DNS, L7) to Elasticsearch. Open source data collector for unified logging. [fluentd open source](https://www.fluentd.org/). + } -### Prometheus[​](#prometheus) + } -**Main task**: The default monitoring component for collecting Calico Enterprise policy metrics. It can also be used to collect metrics on calico/nodes from Felix. Prometheus is an open-source toolkit for systems monitoring and alerting. [Prometheus metrics](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/prometheus), and [Configure Prometheus](https://docs.tigera.io/calico-enterprise/latest/operations/monitor/). +} -## Calico open-source components[​](#calico-open-source-components-1) +{ -### API server[​](#api-server) + "name": "csi-node-driver-registrar", -**Main task**: Allows users to manage Calico Enterprise resources such as policies and tiers through `kubectl` or the Kubernetes API. `kubectl` has significant advantages over `calicoctl` including: audit logging, RBAC using Kubernetes Roles and RoleBindings, and not needing to provide privileged Kubernetes CRD access to anyone who needs to manage resources. [API server](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserver). + "resources": { -### BIRD[​](#bird) + "limits": { -**Main task**: Gets routes from Felix and distributes to BGP peers on the network for inter-host routing. Runs on each node that hosts a Felix agent. Open source, internet routing daemon. [BIRD](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration#content-main). + "cpu": "1", -The BGP client is responsible for: + "memory": "1000Mi" -- **Route distribution** + }, - When Felix inserts routes into the Linux kernel FIB, the BGP client distributes them to other nodes in the deployment. 
This ensures efficient traffic routing for the deployment. + "requests": { -- **BGP route reflector configuration** + "cpu": "50m", - BGP route reflectors are often configured for large deployments rather than a standard BGP client. (Standard BGP requires that every BGP client be connected to every other BGP client in a mesh topology, which is difficult to maintain.) For redundancy, you can seamlessly deploy multiple BGP route reflectors. Note that BGP route reflectors are involved only in control of the network: endpoint data does not pass through them. When the Calico Enterprise BGP client advertises routes from its FIB to the route reflector, the route reflector advertises those routes to the other nodes in the deployment. + "memory": "50Mi" -### calicoctl[​](#calicoctl) + } -**Main task**: Command line interface used largely during pre-installation for CRUD operations on Calico Enterprise objects. `kubectl` is the recommended CLI for CRUD operations. calicoctl is available on any host with network access to the Calico Enterprise datastore as either a binary or a container. Requires separate installation. [calicoctl](https://docs.tigera.io/calico-enterprise/latest/reference/clis/calicoctl/). + } -### calico-node[​](#calico-node) +} +``` -**Main task**: Bundles key components that are required for networking containers with Calico Enterprise: +## IntrusionDetection custom resource[​](#intrusiondetection-custom-resource) -- Felix -- BIRD -- confd +The [IntrusionDetection](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#intrusiondetection) CR provides a way to configure resources for IntrusionDetectionControllerDeployment. The following sections provide example configurations for this CR. -The calico repository contains the Dockerfile for calico-node, along with various configuration files to configure and “glue” these components together. In addition, we use runit for logging (svlogd) and init (runsv) services. 
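The CR patches in the sections that follow pass a deeply nested merge patch as one long quoted string. As a minimal local sketch (assuming only `jq` is installed; no cluster is needed to run it), the same patch can be assembled with `jq -n` so quoting and nesting errors surface before anything touches the cluster — the container name and resource values mirror the IntrusionDetection example on this page:

```shell
# Sketch (not part of the documented tooling): build the nested merge patch
# with jq -n instead of hand-writing the JSON string.
patch=$(jq -n '{
  spec: {
    intrusionDetectionControllerDeployment: {
      spec: {
        template: {
          spec: {
            containers: [{
              name: "controller",
              resources: {
                limits:   {cpu: "1",    memory: "1000Mi"},
                requests: {cpu: "100m", memory: "1000Mi"}
              }
            }]
          }
        }
      }
    }
  }
}')
echo "$patch" | jq -c .

# Against a real cluster, this would then be applied with:
#   kubectl patch intrusiondetection tigera-secure --type=merge --patch="$patch"
```

The `kubectl patch` line is left commented out because it requires a connected cluster; everything above it runs locally.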
[calico-node](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration). +### IntrusionDetectionControllerDeployment.[​](#intrusiondetectioncontrollerdeployment) -### CNI plugin[​](#cni-plugin) +To configure resource specification for the [IntrusionDetectionControllerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#intrusiondetectioncontrollerdeployment), patch the IntrusionDetection CR using the below command: -**Main task**: Provides Calico Enterprise networking for Kubernetes clusters. +```bash +kubectl patch intrusiondetection tigera-secure --type=merge --patch='{"spec": {"intrusionDetectionControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"webhooks-processor","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}},{"name":"controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}' +``` -The Calico CNI plugin allows you to use Calico networking for any orchestrator that makes use of the CNI networking specification. The Calico binary that presents this API to Kubernetes is called the CNI plugin, and must be installed on every node in the Kubernetes cluster. Configured through the standard [CNI configuration mechanism](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration), and [Calico CNI plugin](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration). +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 1000 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 1000 Mebibytes (MiB). -### confd[​](#confd) +#### Verification[​](#verification-13) -**Main task**: Monitors Calico Enterprise datastore for changes to BGP configuration and global defaults such as AS number, logging levels, and IPAM information. 
An open source, lightweight configuration management tool. +You can verify the configured resources using the following command: -Confd dynamically generates BIRD configuration files based on the updates to data in the datastore. When the configuration file changes, confd triggers BIRD to load the new files. [Configure confd](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration#content-main), and [confd project](https://github.com/kelseyhightower/confd). +```bash +kubectl get deployment.apps/intrusion-detection-controller -n tigera-intrusion-detection -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -### Datastore plugin[​](#datastore-plugin) +This command will output the configured resource requests and limits for the IntrusionDetectionControllerDeployment in JSON format. -**Main task**: The datastore for the Calico Enterprise CNI plugin. The Kubernetes API datastore: +```bash +{ -- Is simple to manage because it does not require an extra datastore -- Uses Kubernetes RBAC to control access to Calico resources -- Uses Kubernetes audit logging to generate audit logs of changes to Calico Enterprise resources + "name": "controller", -### Felix[​](#felix) + "resources": { -**Main task**: Programs routes and ACLs, and anything else required on the host to provide desired connectivity for the endpoints on that host. Runs on each machine that hosts endpoints. Runs as an agent daemon. [Felix resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/felixconfig). + "limits": { -Depending on the specific orchestrator environment, Felix is responsible for: + "cpu": "1", -- **Interface management** + "memory": "1000Mi" - Programs information about interfaces into the kernel so the kernel can correctly handle the traffic from that endpoint. 
In particular, it ensures that the host responds to ARP requests from each workload with the MAC of the host, and enables IP forwarding for interfaces that it manages. It also monitors interfaces to ensure that the programming is applied at the appropriate time. + }, -- **Route programming** + "requests": { - Programs routes to the endpoints on its host into the Linux kernel FIB (Forwarding Information Base). This ensures that packets destined for those endpoints that arrive at the host are forwarded accordingly. + "cpu": "100m", -- **ACL programming** + "memory": "1000Mi" - Programs ACLs into the Linux kernel to ensure that only valid traffic can be sent between endpoints, and that endpoints cannot circumvent Calico Enterprise security measures. + } -- **State reporting** + } - Provides network health data. In particular, it reports errors and problems when configuring its host. This data is written to the datastore so it is visible to other components and operators of the network. +} -> **SECONDARY:** `node` can be run in *policy only mode* where Felix runs without BIRD and confd. This provides policy management without route distribution between hosts, and is used for deployments like managed cloud providers. +{ -### IPAM plugin[​](#ipam-plugin) + "name": "webhooks-processor", -**Main task**: Uses Calico Enterprise’s IP pool resource to control how IP addresses are allocated to pods within the cluster. It is the default plugin used by most Calico Enterprise installations. It is one of the Calico Enterprise [CNI plugins](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration). + "resources": { -### Typha[​](#typha) + "limits": { -**Main task**: Increases scale by reducing each node’s impact on the datastore. Runs as a daemon between the datastore and instances of Felix. Installed by default, but not configured. 
[Typha description](https://github.com/projectcalico/typha), and [Typha component](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/typha/). + "cpu": "1", -Typha maintains a single datastore connection on behalf of all of its clients like Felix and confd. It caches the datastore state and deduplicates events so that they can be fanned out to many listeners. Because one Typha instance can support hundreds of Felix instances, it reduces the load on the datastore by a large factor. And because Typha can filter out updates that are not relevant to Felix, it also reduces Felix’s CPU usage. In a high-scale (100+ node) Kubernetes cluster, this is essential because the number of updates generated by the API server scales with the number of nodes. + "memory": "1000Mi" -## Kubernetes components[​](#kubernetes-components-1) + }, -### Kubernetes API server[​](#kubernetes-api-server) + "requests": { -**Main task**: A Kubernetes component that validates and configures data for the API objects (for example, pods, services, and others). Proxies requests for Calico Enterprise API resources to the Kubernetes API server through an aggregation layer. + "cpu": "100m", -### kubectl[​](#kubectl) + "memory": "1000Mi" -**Main task**: The recommended command line interface for CRUD operations on Calico Enterprise and Calico objects. [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/). + } -### The Calico Enterprise data path: IP routing and iptables + } -One of Calico Enterprise’s key features is how packets flow between workloads in a data center, or between a workload and the Internet, without additional encapsulation. +} +``` -In the Calico Enterprise approach, IP packets to or from a workload are routed and firewalled by the Linux routing table and iptables or eBPF infrastructure on the workload’s host. 
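The resource examples on this page all follow one convention: requests at or below limits. A local sanity check for that invariant can be sketched as follows (a sketch, assuming `jq`; the values mirror this page's examples, and the unit handling covers only the `m` and `Mi` suffixes used here, not every Kubernetes quantity format):

```shell
# Sketch: confirm container resource requests do not exceed limits
# before building a patch from them.
resources='{"limits":{"cpu":"1","memory":"1000Mi"},"requests":{"cpu":"100m","memory":"100Mi"}}'

norm() {
  # "100Mi" -> 100 (MiB), "100m" -> 100 (milliCPU), bare "1" -> 1000 milliCPU
  case "$1" in
    *Mi) echo "${1%Mi}" ;;
    *m)  echo "${1%m}" ;;
    *)   echo $(( $1 * 1000 )) ;;
  esac
}

for kind in cpu memory; do
  req=$(norm "$(echo "$resources" | jq -r ".requests.$kind")")
  lim=$(norm "$(echo "$resources" | jq -r ".limits.$kind")")
  [ "$req" -le "$lim" ] && echo "$kind: request $req <= limit $lim"
done
```

CPU and memory are normalized to different base units (milliCPU versus MiB), which is fine here because each kind is only ever compared against itself.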
For a workload that is sending packets, Calico Enterprise ensures that the host is always returned as the next hop MAC address regardless of whatever routing the workload itself might configure. For packets addressed to a workload, the last IP hop is that from the destination workload’s host to the workload itself. +## LogCollector custom resource[​](#logcollector-custom-resource) -![Calico datapath](https://docs.tigera.io/assets/images/calico-datapath-164f6f29c7b21889c1d4b517a2695533.png) +The [LogCollector](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#logcollector) CR provides a way to configure resources for FluentdDaemonSet, EKSLogForwarderDeployment. -Suppose that IPv4 addresses for the workloads are allocated from a datacenter-private subnet of 10.65/16, and that the hosts have IP addresses from 172.18.203/24. If you look at the routing table on a host: +### FluentdDaemonSet.[​](#fluentddaemonset) + +To configure resource specification for the [FluentdDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#fluentddaemonset), patch the LogCollector CR using the below command: ```bash -route -n +kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"fluentdDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"fluentd","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' ``` -You will see something like this: +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
-```text -Kernel IP routing table +#### Verification[​](#verification-14) -Destination Gateway Genmask Flags Metric Ref Use Iface +You can verify the configured resources using the following command: -0.0.0.0 172.18.203.1 0.0.0.0 UG 0 0 0 eth0 +```bash +kubectl get daemonset.apps/fluentd-node -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -10.65.0.0 0.0.0.0 255.255.0.0 U 0 0 0 ns-db03ab89-b4 +This command will output the configured resource requests and limits for the FluentdDaemonSet in JSON format. -10.65.0.21 172.18.203.126 255.255.255.255 UGH 0 0 0 eth0 +```bash +{ -10.65.0.22 172.18.203.129 255.255.255.255 UGH 0 0 0 eth0 + "name": "fluentd", -10.65.0.23 172.18.203.129 255.255.255.255 UGH 0 0 0 eth0 + "resources": { -10.65.0.24 0.0.0.0 255.255.255.255 UH 0 0 0 tapa429fb36-04 + "limits": { -172.18.203.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 -``` + "cpu": "1", -There is one workload on this host with IP address 10.65.0.24, and accessible from the host via a TAP (or veth, etc.) interface named tapa429fb36-04. Hence there is a direct route for 10.65.0.24, through tapa429fb36-04. Other workloads, with the .21, .22 and .23 addresses, are hosted on two other hosts (172.18.203.126 and .129), so the routes for those workload addresses are via those hosts. + "memory": "1000Mi" -The direct routes are set up by a Calico Enterprise agent named Felix when it is asked to provision connectivity for a particular workload. A BGP client (such as BIRD) then notices those and distributes them – perhaps via a route reflector – to BGP clients running on other hosts, and hence the indirect routes appear also. + }, -## Is that all?[​](#is-that-all) + "requests": { -As far as the static data path is concerned, yes. It’s just a combination of responding to workload ARP requests with the host MAC, IP routing and iptables or eBPF. 
There’s a great deal more to Calico Enterprise in terms of how the required routing and security information is managed, and for handling dynamic things such as workload migration – but the basic data path really is that simple. + "cpu": "100m", -### Network design + "memory": "100Mi" - + } -## [📄️Calico over Ethernet fabrics](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric) + } -[Understand the interconnect fabric options in a Calico network.](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric) +} +``` -## [📄️Calico over IP fabrics](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l3-interconnect-fabric) +### EKSLogForwarderDeployment.[​](#ekslogforwarderdeployment) -[Understand considerations for implementing interconnect fabrics with Calico.](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l3-interconnect-fabric) +To configure resource specification for the [EKSLogForwarderDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#ekslogforwarderdeployment), patch the LogCollector CR using the below command: -### Calico over Ethernet fabrics +```bash +kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"eksLogForwarderDeployment": {"spec": {"template": {"spec": {"containers":[{"name":"eks-log-forwarder","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` -Any technology that is capable of transporting IP packets can be used as the interconnect fabric in a Calico Enterprise network. This means that the standard tools used to transport IP, such as MPLS and Ethernet can be used in a Calico Enterprise network. 
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). -The focus of this article is on Ethernet as the interconnect network. Most at-scale cloud operators have converted to IP fabrics, and that infrastructure will work for Calico Enterprise as well. However, the concerns that drove most of those operators to IP as the interconnection network in their pods are largely ameliorated by Calico Enterprise, allowing Ethernet to be viably considered as a Calico Enterprise interconnect, even in large-scale deployments. +#### Verification[​](#verification-15) -## Concerns over Ethernet at scale[​](#concerns-over-ethernet-at-scale) +You can verify the configured resources using the following command: -It has been acknowledged by the industry for years that, beyond a certain size, classical Ethernet networks are unsuitable for production deployment. Although there have been [multiple](https://en.wikipedia.org/wiki/Provider_Backbone_Bridge_Traffic_Engineering) [attempts](https://web.archive.org/web/20150923231827/https://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_14-3/143_trill.html) [to address](https://en.wikipedia.org/wiki/Virtual_Private_LAN_Service) these issues, the scale-out networking community has largely abandoned Ethernet for anything other than providing physical point-to-point links in the networking fabric. The principal reasons for Ethernet failures at large scale are: +```bash +kubectl get deployment.apps/eks-log-forwarder -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -- Large numbers of *endpoints* ([note 1](#note-1)) +This command will output the configured resource requests and limits for the EKSLogForwarderDeployment in JSON format. 
- Each switch in an Ethernet network must learn the path to all Ethernet endpoints that are connected to the Ethernet network. Learning this amount of state can become a substantial task when we are talking about hundreds of thousands of *endpoints*. +```bash +{ -- High rate of *churn* or change in the network + "name": "eks-log-forwarder", - With that many endpoints, most of them being ephemeral (such as virtual machines or containers), there is a large amount of *churn* in the network. That load of re-learning paths can be a substantial burden on the control plane processor of most Ethernet switches. + "resources": { -- High volumes of broadcast traffic + "limits": { - As each node on the Ethernet network must use Broadcast packets to locate peers, and many use broadcast for other purposes, the resultant packet replication to each and every endpoint can lead to *broadcast storms* in large Ethernet networks, effectively consuming most, if not all resources in the network and the attached endpoints. + "cpu": "1", -- Spanning tree + "memory": "1000Mi" - Spanning tree is the protocol used to keep an Ethernet network from forming loops. The protocol was designed in the era of smaller, simpler networks, and it has not aged well. As the number of links and interconnects in an Ethernet network goes up, many implementations of spanning tree become more *fragile*. Unfortunately, when spanning tree fails in an Ethernet network, the effect is a catastrophic loop or partition (or both) in the network, and, in most cases, difficult to troubleshoot or resolve. + }, -Although many of these issues are crippling at *VM scale* (tens of thousands of endpoints that live for hours, days, weeks), they will be absolutely lethal at *container scale* (hundreds of thousands of endpoints that live for seconds, minutes, days). + "requests": { -If you weren't ready to turn off your Ethernet data center network before this, I bet you are now. 
Before you do, however, let's look at how Calico Enterprise can mitigate these issues, even in very large deployments. + "cpu": "100m", -## How does Calico Enterprise tame the Ethernet daemons?[​](#how-does-calico-enterprise-tame-the-ethernet-daemons) + "memory": "100Mi" -First, let's look at how Calico Enterprise uses an Ethernet interconnect fabric. It's important to remember that an Ethernet network *sees* nothing on the other side of an attached IP router; the Ethernet network just *sees* the router itself. This is why Ethernet switches can be used at Internet peering points, where large fractions of Internet traffic are exchanged. The switches only see the routers from the various ISPs, not those ISPs' customers' nodes. We leverage the same effect in Calico Enterprise. + } -To take the issues outlined above, let's revisit them in a Calico Enterprise context. + } -- Large numbers of endpoints +} +``` - In a Calico Enterprise network, the Ethernet interconnect fabric only sees the routers/compute servers, not the endpoints. In a standard cloud model, where there are tens of VMs per server (or hundreds of containers), this reduces the number of nodes that the Ethernet network sees (and has to learn) by one to two orders of magnitude. Even in very large pods (say twenty thousand servers), the Ethernet network would still only see a few tens of thousands of endpoints. Well within the scale of any competent data center Ethernet top of rack (ToR) switch. +## LogStorage custom resource[​](#logstorage-custom-resource) -- High rate of churn +The [LogStorage](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#logstorage) CR provides a way to configure resources for ECKOperatorStatefulSet, Kibana, LinseedDeployment, ElasticsearchMetricsDeployment. The following sections provide example configurations for this CR. - In a classical Ethernet data center fabric, there is a *churn* event each time an endpoint is created, destroyed, or moved. 
In a large data center, with hundreds of thousands of endpoints, this *churn* could run into tens of events per second, every second of the day, with peaks easily in the hundreds or thousands of events per second. In a Calico Enterprise network, however, the *churn* is very low. The only events that would lead to *churn* are the addition and removal of compute servers, not endpoints. Even in a pathological case (orders of magnitude more than what is normally experienced), there would only be two thousand events per **day**. Any switch that cannot handle that volume of change in the network should not be used for any application. +### ECKOperatorStatefulSet.[​](#eckoperatorstatefulset) -- High volume of broadcast traffic +To configure resource specification for the [ECKOperatorStatefulSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#eckoperatorstatefulset), patch the LogStorage CR using the below command: - Because the first (and last) hop for any traffic in a Calico Enterprise network is an IP hop, and IP hops terminate broadcast traffic, there is no endpoint broadcast network in the Ethernet fabric, period. In fact, the only broadcast traffic that should be seen in the Ethernet fabric is the ARPs of the compute servers locating each other. If the traffic pattern is fairly consistent, the steady-state ARP rate should be almost zero. Even in a pathological case, the ARP rate should be well within normal accepted boundaries. +```bash +kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"eckOperatorStatefulSet":{"spec": {"template": {"spec": {"containers":[{"name":"manager","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` -- Spanning tree +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). - Depending on the architecture chosen for the Ethernet fabric, it may even be possible to turn off spanning tree. 
However, even if it is left on, due to the reduction in node count, and reduction in churn, most competent spanning tree implementations should be able to handle the load without stress. +#### Verification[​](#verification-16) -With these considerations in mind, it should be evident that an Ethernet connection fabric in Calico Enterprise is not only possible, it is practical and should be seriously considered as the interconnect fabric for a Calico Enterprise network. +You can verify the configured resources using the following command: -As mentioned in the IP fabric post, an IP fabric is also quite feasible for Calico Enterprise, but there are more considerations that must be taken into account. The Ethernet fabric option has fewer architectural considerations in its design. +```bash +kubectl get statefulset.apps/elastic-operator -n tigera-eck-operator -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -## A brief note about Ethernet topology[​](#a-brief-note-about-ethernet-topology) +This command will output the configured resource requests and limits for the ECKOperatorStatefulSet in JSON format. -As mentioned elsewhere in the Calico Enterprise documentation, because Calico Enterprise can use most of the standard IP tooling, some interesting options regarding fabric topology become possible. +```bash +{ -We assume that an Ethernet fabric for Calico Enterprise would most likely be constructed as a *leaf/spine* architecture. Other options are possible, but the *leaf/spine* is the predominant architectural model in use in scale-out infrastructure today. + "name": "manager", -Because Calico Enterprise is an IP routed fabric, a Calico Enterprise network can use [ECMP](https://en.wikipedia.org/wiki/Equal-cost_multi-path_routing) to distribute traffic across multiple links (instead of using Ethernet techniques such as MLAG). 
By leveraging ECMP load balancing on the Calico Enterprise compute servers, it is possible to build the fabric out of multiple *independent* leaf/spine planes using no technologies other than IP routing in the Calico Enterprise nodes, and basic Ethernet switching in the interconnect fabric. These planes would operate completely independently and could be designed such that they would not share a fault domain. This would allow for the catastrophic failure of one (or more) plane(s) of Ethernet interconnect fabric without the loss of the pod (the failure would just decrease the amount of interconnect bandwidth in the pod). This is a gentler failure mode than the pod-wide IP or Ethernet failure that is possible with today's designs. + "resources": { -You might find this [Facebook blog post](https://engineering.fb.com/2014/11/14/production-engineering/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/) on their fabric approach interesting. A graphic to visualize the idea is shown below. + "limits": { -![Ethernet spine planes](https://docs.tigera.io/assets/images/l2-spine-planes-d1685acaabb4c4a56f5b79d9932f8796.png) + "cpu": "1", -The endpoints are not shown in this diagram, and the endpoints would be unaware of anything in the fabric (as noted above). + "memory": "1000Mi" -In this diagram, each ToR is segmented into four logical switches (possibly by using 'port VLANs') ([note 2](#note-2)), and each compute server has a connection to each of those logical switches. We will identify those logical switches by their color. Each ToR would then have a blue, green, orange, and red logical switch. Those 'colors' would be members of a given *plane*, so there would be a blue plane, a green plane, an orange plane, and a red plane. Each plane would have a dedicated spine switch, and each ToR in a given plane would be connected to its spine, and only its spine. 
+ }, -Each plane would constitute an IP network, so the blue plane would be 2001:db8:1000::/36, the green would be 2001:db8:2000::/36, and the orange and red planes would be 2001:db8:3000::/36 and 2001:db8:4000::/36 respectively ([note 3](#note-3)). + "requests": { -Each IP network (plane) requires its own BGP route reflectors. Those route reflectors need to be peered with each other within the plane, but the route reflectors in different planes do not need to be peered with one another. Therefore, a fabric of four planes would have four route reflector meshes. Each compute server, border router, *etc.* would need to be a route reflector client of at least one route reflector in each plane, and very preferably two or more in each plane. + "cpu": "100m", -The following diagram visualizes the route reflector environment. + "memory": "100Mi" -![route-reflector](https://docs.tigera.io/assets/images/l2-rr-spine-planes-d10ad67fe16f2c08329e0baf80f213fc.png) + } -These route reflectors could be dedicated hardware connected to the spine switches (or the spine switches themselves), or physical or virtual route reflectors connected to the necessary logical leaf switches (blue, green, orange, and red). The latter could be a route reflector running on a compute server and connected directly to the correct plane link, and not routed through the vRouter, to avoid the chicken and egg problem that would occur if the route reflector were "behind" the Calico Enterprise network. + } -Other physical and logical configurations and counts are, of course, possible; this is just an example. +} +``` -In the logical configuration, each compute server would then have an address on each plane's subnet, and would announce its endpoints on each subnet. If ECMP is then turned on, the compute servers would distribute the load across all planes. +### Kibana[​](#kibana) -If a plane were to fail (say due to a spanning tree failure), then only that one plane would fail. The remaining planes would stay running. 
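The route reflector fan-out described above can be sized with simple arithmetic — a sketch, where the four planes and two reflectors per plane come from the example in the text, while the server count is purely illustrative:

```shell
# Sketch: BGP session budget for the multi-plane route reflector design.
# planes and rr_per_plane follow the example above; servers is an
# illustrative assumption, not a number from the text.
servers=2000
planes=4
rr_per_plane=2   # "at least one ... and very preferably two or more"

sessions_per_server=$(( planes * rr_per_plane ))
total_client_sessions=$(( servers * sessions_per_server ))

echo "RR sessions per compute server: $sessions_per_server"
echo "RR client sessions across the fabric: $total_client_sessions"
```

Because the planes are independent meshes, session counts grow linearly with the number of planes rather than with the square of the node count, which is the point of using route reflectors instead of a full BGP mesh.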
+To configure resource specification for the [Kibana](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#kibana), patch the LogStorage CR using the below command: -### Footnotes[​](#footnotes) +```bash +kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"kibana":{"spec": {"template": {"spec": {"containers":[{"name":"kibana","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` -### Note 1[​](#note-1) +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). -In this document (and in all Calico Enterprise documents) we tend to use the term *endpoint* to refer to a virtual machine, container, appliance, bare metal server, or any other entity that is connected to a Calico Enterprise network. If we are referring to a specific type of endpoint, we will call that out (such as referring to the behavior of VMs as distinct from containers). +#### Verification[​](#verification-17) -### Note 2[​](#note-2) +You can verify the configured resources using the following command: -We are using logical switches in this example. Physical ToRs could also be used, or a mix of the two (say 2 logical switches hosted on each physical switch). +```bash +kubectl get deployment.apps/tigera-secure-kb -n tigera-kibana -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -### Note 3[​](#note-3) +This command will output the configured resource requests and limits for the Kibana in JSON format. -We use IPv6 here purely as an example. IPv4 would be configured similarly. +```bash +{ -### Calico over IP fabrics + "name": "kibana", -Calico Enterprise provides an end-to-end IP network that interconnects the endpoints ([note 1](#note-1)) in a scale-out or cloud environment. 
To do that, it needs an *interconnect fabric* to provide the physical networking layer on which Calico Enterprise operates ([note 2](#note-2)). + "resources": { -Although Calico Enterprise is designed to work with any underlying interconnect fabric that can support IP traffic, the fabric that has the least considerations attached to its implementation is an Ethernet fabric as discussed in [Calico over Ethernet fabrics](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric). + "limits": { -In most cases, the Ethernet fabric is the appropriate choice, but there are infrastructures where L3 (an IP fabric) has already been deployed, or will be deployed, and it makes sense for Calico Enterprise to operate in those environments. + "cpu": "1", -However, because Calico Enterprise is, itself, a routed infrastructure, there are more engineering, architecture, and operations considerations that have to be weighed when running Calico Enterprise with an IP routed interconnection fabric. We will briefly outline those in the rest of this post. That said, Calico Enterprise operates equally well with Ethernet or IP interconnect fabrics. + "memory": "1000Mi" -## Background[​](#background) + }, -### Basic Calico Enterprise architecture overview[​](#basic-calico-enterprise-architecture-overview) + "requests": { -A description of the Calico Enterprise architecture can be found in our [architectural overview](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/overview). However, a brief discussion of the routing and data paths is useful for the discussion. + "cpu": "100m", -In a Calico Enterprise network, each compute server acts as a router for all of the endpoints that are hosted on that compute server. We call that function a vRouter. The data path is provided by the Linux kernel, the control plane by a BGP protocol server, and management plane by Calico Enterprise's on-server agent, *Felix*. 
+ "memory": "100Mi" -Each endpoint can only communicate through its local vRouter, and the first and last *hop* in any Calico Enterprise packet flow is an IP router hop through a vRouter. Each vRouter announces all of the endpoints it is attached to all the other vRouters and other routers on the infrastructure fabric, using BGP, usually with BGP route reflectors to increase scale. A discussion of why we use BGP can be found in [Why BGP?](https://www.tigera.io/blog/why-bgp/). + } -Access control lists (ACLs) enforce security (and other) policy as directed by whatever cloud orchestrator is in use. There are other components in the Calico Enterprise architecture, but they are irrelevant to the interconnect network fabric discussion. + } -### Overview of current common IP scale-out fabric architectures[​](#overview-of-current-common-ip-scale-out-fabric-architectures) +} +``` -There are two approaches to building an IP fabric for a scale-out infrastructure. However, all of them, to date, have assumed that the edge router in the infrastructure is the top of rack (TOR) switch. In the Calico Enterprise model, that function is pushed to the compute server itself. 
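Merge patches like the one above are deeply nested and easy to mistype by hand. As an illustrative sketch only (the helper name is ours, not part of any Tigera tooling), the same Kibana payload can be generated and checked in Python before handing it to `kubectl patch --patch`:

```python
import json

def kibana_resource_patch(cpu_request, mem_request, cpu_limit, mem_limit):
    """Build the LogStorage merge patch that sizes the "kibana" container.

    Mirrors the nesting used in the kubectl command above:
    spec.kibana.spec.template.spec.containers[].resources
    """
    container = {
        "name": "kibana",
        "resources": {
            "limits": {"cpu": cpu_limit, "memory": mem_limit},
            "requests": {"cpu": cpu_request, "memory": mem_request},
        },
    }
    return {
        "spec": {
            "kibana": {
                "spec": {"template": {"spec": {"containers": [container]}}}
            }
        }
    }

# Same values as the kubectl command above; emit a single-line JSON string
# suitable for --patch='...'.
print(json.dumps(kibana_resource_patch("100m", "100Mi", "1", "1000Mi")))
```

Quantities use the usual Kubernetes suffixes (`m` for milliCPU, `Mi` for mebibytes); the generated string can be pasted directly into the `--patch` argument.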
+### LinseedDeployment[​](#linseeddeployment)
-The two approaches are:
+To configure resource specification for the [LinseedDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#linseeddeployment), patch the LogStorage CR using the following command:
-**Routing infrastructure is based on some form of IGP**
+```bash
+kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"linseedDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-linseed","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
-Due to the limitations in scale of IGP networks, the Calico Enterprise team does not believe that using an IGP to distribute endpoint reachability information will adequately scale in a Calico Enterprise environment. However, it is possible to use a combination of IGP and BGP in the interconnect fabric, where an IGP communicates the path to the *next-hop* router (in Calico Enterprise, this is often the destination compute server) and BGP is used to distribute the actual next-hop for a given endpoint. This is a valid model, and, in fact is the most common approach in a widely distributed IP network (say a carrier's backbone network). The design of these networks is somewhat complex though, and will not be addressed further in this article. ([note 3](#note-3)).
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), and it sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
-**Routing infrastructure is based entirely on BGP**
+#### Verification[​](#verification-18)
-In this model, the IP network is "tight enough" or has a small enough diameter that BGP can be used to distribute endpoint routes, and the paths to the next-hops for those routes is known to all of the routers in the network (in a Calico Enterprise network this includes the compute servers).
This is the network model that this note will address. +You can verify the configured resources using the following command: -In this article, we will cover the second option because it is more common in the scale-out world. +```bash +kubectl get deployment.apps/tigera-linseed -n tigera-elasticsearch -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -### BGP-only interconnect fabrics[​](#bgp-only-interconnect-fabrics) +This command will output the configured resource requests and limits for the LinseedDeployment in JSON format. -There are multiple methods to build a BGP-only interconnect fabric. We will focus on three models, each with two widely viable variations. There are other options, and we will briefly touch on why we didn't include some of them in [Other Options](#other-options). +```bash +{ -The two methods are: + "name": "tigera-linseed", -- A BGP fabric where each of the TOR switches (and their subsidiary compute servers) are a unique [Autonomous System (AS)](https://en.wikipedia.org/wiki/Autonomous_System_\(Internet\)) and they are interconnected via either an Ethernet switching plane provided by the spine switches in a [leaf/spine](http://bradhedlund.com/2012/10/24/video-a-basic-introduction-to-the-leafspine-data-center-networking-fabric-design/) architecture, or via a set of spine switches, each of which is also a unique AS. We'll refer to this as the *AS per rack* model. This model is detailed in [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938). + "resources": { -- A BGP fabric where each of the compute servers is a unique AS, and the TOR switches make up a transit AS. We'll refer to this as the *AS per server* model. + "limits": { -Each of these models can either have an Ethernet or IP spine. 
In the case of an Ethernet spine, each spine switch provides an isolated Ethernet connection *plane* as in the Calico Enterprise Ethernet interconnect fabric model and each TOR switch is connected to each spine switch. + "cpu": "1", -Another model is where each spine switch is a unique AS, and each TOR switch BGP peers with each spine switch. In both cases, the TOR switches use ECMP to load-balance traffic between all available spine switches. + "memory": "1000Mi" -### BGP network design considerations[​](#bgp-network-design-considerations) + }, -Contrary to popular opinion, BGP is actually a fairly simple protocol. For example, the BGP configuration on a Calico Enterprise compute server is approximately sixty lines long, not counting comments. The perceived complexity is due to the things that you can *do* with BGP. Many uses of BGP involve complex policy rules, where the behavior of BGP can be modified to meet technical (or business, financial, political, etc.) requirements. A default Calico Enterprise network does not venture into those areas, ([note 4](#note-4)) and therefore is fairly straight forward. + "requests": { -That said, there are a few design rules for BGP that need to be kept in mind when designing an IP fabric that will interconnect nodes in a Calico Enterprise network. These BGP design requirements *can* be worked around, if necessary, but doing so takes the designer out of the standard BGP *envelope* and should only be done by an implementer who is *very* comfortable with advanced BGP design. + "cpu": "100m", -These considerations are: + "memory": "100Mi" -- AS continuity or *AS puddling* + } - Any router in an AS *must* be able to communicate with any other router in that same AS without transiting another AS. + } -- Next hop behavior +} +``` - By default BGP routers do not change the *next hop* of a route if it is peering with another router in its same AS. 
The inverse is also true, a BGP router will set itself as the *next hop* of a route if it is peering with a router in another AS.
+### ElasticsearchMetricsDeployment[​](#elasticsearchmetricsdeployment)
-- Route reflection
+To configure resource specification for the [ElasticsearchMetricsDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#elasticsearchmetricsdeployment), patch the LogStorage CR using the following command:
- All BGP routers in a given AS must *peer* with all the other routers in that AS. This is referred to a *complete BGP mesh*. This can become problematic as the number of routers in the AS scales up. The use of *route reflectors* reduce the need for the complete BGP mesh. However, route reflectors also have scaling considerations.
+```bash
+kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"elasticsearchMetricsDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-elasticsearch-metrics","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}'
+```
-- Endpoints
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 1000 mebibytes (MiB), and it sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
- In a Calico Enterprise network, each endpoint is a route. Hardware networking platforms are constrained by the number of routes they can learn. This is usually in range of 10,000's or 100,000's of routes. Route aggregation can help, but that is usually dependent on the capabilities of the scheduler used by the orchestration software (*e.g.* OpenStack).
+#### Verification[​](#verification-19)
-A deeper discussion of these considerations can be found in the [IP Fabric Design Considerations](#ip-fabric-design-considerations).
+You can verify the configured resources using the following command:
-The designs discussed below address these considerations.
+```bash +kubectl get deployment.apps/tigera-elasticsearch-metrics -n tigera-elasticsearch -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -### The AS Per Rack model[​](#the-as-per-rack-model) +This command will output the configured resource requests and limits for the ElasticsearchMetricsDeployment in JSON format. -This model is the closest to the model suggested by [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938). +```bash +{ -As mentioned earlier, there are two versions of this model, one with an set of Ethernet planes interconnecting the ToR switches, and the other where the core planes are also routers. The following diagrams may be useful for the discussion. + "name": "tigera-elasticsearch-metrics", -![Diagram showing the AS per rack model with ToR switches meshed via Ethernet switching planes at the spine layer](https://docs.tigera.io/assets/images/l3-fabric-diagrams-as-rack-l2-spine-586a942656c4718cae0d17e78de81a15.png) + "resources": { -The diagram above shows the **AS per rack model** where the ToR switches are physically meshed via a set of Ethernet switching planes. + "limits": { -![Diagram showing the AS per rack model with ToR switches meshed via discrete BGP spine routers, each in their own AS](https://docs.tigera.io/assets/images/l3-fabric-diagrams-as-rack-l3-spine-731d38ec8419d6e7a50a6ee9a610bdf1.png) + "cpu": "1", -The diagram above shows the **AS per rack model** where the ToR switches are physically meshed via a set of discrete BGP spine routers, each in their own AS. + "memory": "1000Mi" -In this approach, every ToR-ToR or ToR-Spine (in the case of an AS per spine) link is an eBGP peering which means that there is no route-reflection possible (using standard BGP route reflectors) *north* of the ToR switches. + }, -If the L2 spine option is used, the result of this is that each ToR must either peer with every other ToR switch in the cluster (which could be hundreds of peers). 
+ "requests": { -If the AS per spine option is used, then each ToR only has to peer with each spine (there are usually somewhere between two and sixteen spine switches in a pod). However, the spine switches must peer with all ToR switches (again, that would be hundreds, but most spine switches have more control plane capacity than the average ToR, so this might be more scalable in many circumstances). + "cpu": "100m", -Within the rack, the configuration is the same for both variants, and is somewhat different than the configuration north of the ToR. + "memory": "1000Mi" -Every router within the rack, which, in the case of Calico Enterprise is every compute server, shares the same AS as the ToR that they are connected to. That connection is in the form of an Ethernet switching layer. Each router in the rack must be directly connected to enable the AS to remain contiguous. The ToR's *router* function is then connected to that Ethernet switching layer as well. The actual configuration of this is dependent on the ToR in use, but usually it means that the ports that are connected to the compute servers are treated as *subnet* or *segment* ports, and then the ToR's *router* function has a single interface into that subnet. + } -This configuration allows each compute server to connect to each other compute server in the rack without going through the ToR router, but it will, of course, go through the ToR switching function. The compute servers and the ToR router could all be directly meshed, or a route reflector could be used within the rack, either hosted on the ToR itself, or as a virtual function hosted on one or more compute servers within the rack. + } -The ToR, as the eBGP router redistributes all of the routes from other ToRs as well as routes external to the data center to the compute servers that are in its AS, and announces all of the routes from within the AS (rack) to the other ToRs and the larger world. 
This means that each compute server will see the ToR as the next hop for all external routes, and the individual compute servers are the next hop for all routes internal to the rack.
+}
+```
-### The AS per Compute Server model[​](#the-as-per-compute-server-model)
+## ManagementClusterConnection custom resource[​](#managementclusterconnection-custom-resource)
-This model takes the concept of an AS per rack to its logical conclusion. In the earlier referenced [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938) the assumption in the overall model is that the ToR is first tier aggregating and routing element. In Calico Enterprise, the ToR, if it is an L3 router, is actually the second tier. Remember, in Calico Enterprise, the compute server is always the first/last router for an endpoint, and is also the first/last point of aggregation.
+The [ManagementClusterConnection](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#managementclusterconnection) CR provides a way to configure resources for GuardianDeployment. The following sections provide example configurations for this CR.
-Therefore, if we follow the architecture of the draft, the compute server, not the ToR should be the AS boundary. The differences can be seen in the following two diagrams.
+### GuardianDeployment[​](#guardiandeployment)
-![Diagram showing the AS per compute server model with ToR switches meshed via Ethernet switching planes at the spine layer](https://docs.tigera.io/assets/images/l3-fabric-diagrams-as-server-l2-spine-ef320fdea22b2f69da6211d3731a3c32.png)
+To configure resource specification for the [GuardianDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#guardiandeployment), patch the ManagementClusterConnection CR using the following command:
-The diagram above shows the *AS per compute server model* where the ToR switches are physically meshed via a set of Ethernet switching planes.
+```bash
+kubectl patch managementclusterconnection tigera-secure --type=merge --patch='{"spec": {"guardianDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-guardian","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
-![Diagram showing the AS per compute server model with ToR switches connected to independent routing planes at the spine layer](https://docs.tigera.io/assets/images/l3-fabric-diagrams-as-server-l3-spine-0515c7852f8f7aaf4d550012ff10b5fe.png)
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), and it sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
-The diagram above shows the *AS per compute server model* where the ToR switches are physically connected to a set of independent routing planes.
+#### Verification[​](#verification-20)
-As can be seen in these diagrams, there are still the same two variants as in the *AS per rack* model, one where the spine switches provide a set of independent Ethernet planes to interconnect the ToR switches, and the other where that is done by a set of independent routers.
+You can verify the configured resources using the following command:
-The real difference in this model, is that the compute servers as well as the ToR switches are all independent autonomous systems. To make this work at scale, the use of four byte AS numbers as discussed in [RFC 4893](http://www.faqs.org/rfcs/rfc4893.html). Without using four byte AS numbering, the total number of ToRs and compute servers in a Calico Enterprise fabric would be limited to the approximately five thousand available private AS ([note 5](#note-5)) numbers. If four byte AS numbers are used, there are approximately ninety-two million private AS numbers available. This should be sufficient for any given Calico Enterprise fabric.
+```bash +kubectl get deployment.apps/tigera-guardian -n tigera-guardian -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -The other difference in this model *vs.* the AS per rack model, is that there are no route reflectors used, as all BGP peerings are eBGP. In this case, each compute server in a given rack peers with its ToR switch which is also acting as an eBGP router. For two servers within the same rack to communicate, they will be routed through the ToR. Therefore, each server will have one peering to each ToR it is connected to, and each ToR will have a peering with each compute server that it is connected to (normally, all the compute servers in the rack). +This command will output the configured resource requests and limits for the GuardianDeployment in JSON format. -The inter-ToR connectivity considerations are the same in scale and scope as in the AS per rack model. +```bash +{ -### The Downward Default model[​](#the-downward-default-model) + "name": "tigera-guardian", -The final model is a bit different. Whereas, in the previous models, all of the routers in the infrastructure carry full routing tables, and leave their AS paths intact, this model ([note 6](#note-6)) removes the AS numbers at each stage of the routing path. This is to prevent routes from other nodes in the network from not being installed due to it coming from the *local* AS (since they share the source and dest of the route share the same AS). + "resources": { -The following diagram will show the AS relationships in this model. 
+ "limits": { -![Diagram showing the downward default model where all Calico nodes share one AS and all ToR switches share another, with spine routers announcing default routes downward](https://docs.tigera.io/assets/images/l3-fabric-downward-default-30bce2fe705b14f16d7381cf5612a81c.png) + "cpu": "1", -In the diagram above, we are showing that all Calico Enterprise nodes share the same AS number, as do all ToR switches. However, those ASs are different (*A1* is not the same network as *A2*, even though the both share the same AS number *A* ). + "memory": "1000Mi" -Although the use of a single AS for all ToR switches, and another for all compute servers simplifies deployment (standardized configuration), the real benefit comes in the offloading of the routing tables in the ToR switches. + }, -In this model, each router announces all of its routes to its upstream peer (the Calico Enterprise routers to their ToR, the ToRs to the spine switches). However, in return, the upstream router only announces a default route. In this case, a given Calico Enterprise router only has routes for the endpoints that are locally hosted on it, as well as the default from the ToR. Because the ToR is the only route for the Calico Enterprise network the rest of the network, this matches reality. The same happens between the ToR switches and the spine. This means that the ToR only has to install the routes that are for endpoints that are hosted on its downstream Calico Enterprise nodes. Even if we were to host 200 endpoints per Calico Enterprise node, and stuff 80 Calico Enterprise nodes in each rack, that would still limit the routing table on the ToR to a maximum of 16,000 entries (well within the capabilities of even the most modest of switches). + "requests": { -Because the default is originated by the Spine (originally) there is no chance for a downward announced route to originate from the recipient's AS, preventing the **AS puddling** problem. 
+ "cpu": "100m", -There is one (minor) drawback to this model, in that all traffic that is destined for an invalid destination (the destination IP does not exist) will be forwarded to the spine switches before they are dropped. + "memory": "100Mi" -It should also be noted that the spine switches do need to carry all of the Calico Enterprise network routes, just as they do in the routed spines in the previous examples. In short, this model imposes no more load on the spines than they already would have, and substantially reduces the amount of routing table space used on the ToR switches. It also reduces the number of routes in the Calico Enterprise nodes, but, as we have discussed before, that is not a concern in most deployments as the amount of memory consumed by a full routing table in Calico Enterprise is a fraction of the total memory available on a modern compute server. + } -## Recommendation[​](#recommendation) + } -The Calico Enterprise team recommends the use of the [AS per rack](#the-as-per-rack-model) model if the resultant routing table size can be accommodated by the ToR and spine switches, remembering to account for projected growth. +} +``` -If there is concern about the route table size in the ToR switches, the Calico Enterprise recommends the [Downward Default](#the-downward-default-model) model. +## Manager custom resource[​](#manager-custom-resource) -If there are concerns about both the spine and ToR switch route table capacity, or there is a desire to run a very simple L2 fabric to connect the Calico Enterprise nodes, then the user should consider the Ethernet fabric as detailed in [Calico over Ethernet fabrics](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric). +The [Manager](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#manager) CR provides a way to configure resources for ManagerDeployment. The following sections provide example configurations for this CR. 
-If you are interested in the AS per compute server, the Calico Enterprise team would be very interested in discussing the deployment of that model.
+### ManagerDeployment[​](#managerdeployment)
-## Other options[​](#other-options)
+To configure resource specification for the [ManagerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#managerdeployment), patch the Manager CR using the following command:
-The way the physical and logical connectivity is laid out in this article, and the [Ethernet fabric](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric), the next hop router for a given route is always directly connected to the router receiving that route. This makes the need for another protocol to distribute the next hop routes unnecessary.
+```bash
+kubectl patch manager tigera-secure --type=merge --patch='{"spec": {"managerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-voltron","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-ui-apis","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-manager","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
+```
-However, in many (or most) WAN BGP networks, the routers within a given AS may not be directly adjacent. Therefore, a router may receive a route with a next hop address that it is not directly adjacent to. In those cases, an IGP, such as OSPF or IS-IS, is used by the routers within a given AS to determine the path to the BGP next hop route.
+For each of the three containers, this command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 mebibytes (MiB), and it sets the CPU limit to 1 CPU and the memory limit to 1000 MiB.
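Because the managerDeployment patch repeats an identical resources block for the tigera-voltron, tigera-ui-apis, and tigera-manager containers, generating the payload can be less error-prone than typing it. A hypothetical Python sketch (the helper name is ours, not Tigera tooling) that emits the same patch:

```python
import json

def manager_resource_patch(container_names, requests, limits):
    """Build a Manager merge patch that gives every named container the same
    requests and limits (spec.managerDeployment.spec.template.spec.containers)."""
    containers = [
        {"name": name, "resources": {"limits": dict(limits), "requests": dict(requests)}}
        for name in container_names
    ]
    return {
        "spec": {
            "managerDeployment": {
                "spec": {"template": {"spec": {"containers": containers}}}
            }
        }
    }

# Same container names and values as the kubectl command above.
patch = manager_resource_patch(
    ["tigera-voltron", "tigera-ui-apis", "tigera-manager"],
    requests={"cpu": "100m", "memory": "100Mi"},
    limits={"cpu": "1", "memory": "1000Mi"},
)
print(json.dumps(patch))
```

If the containers ever need different sizes, the list comprehension is the only part that changes; the surrounding nesting stays the same.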
-There may be Calico Enterprise architectures where there are similar models where the routers within a given AS are not directly adjacent. In those models, the use of an IGP in Calico Enterprise may be warranted. The configuration of those protocols are, however, beyond the scope of this technical note. +#### Verification[​](#verification-21) -### IP fabric design considerations[​](#ip-fabric-design-considerations) +You can verify the configured resources using the following command: -**AS puddling** +```bash +kubectl get deployment.apps/tigera-manager -n tigera-manager -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' +``` -The first consideration is that an AS must be kept contiguous. This means that any two nodes in a given AS must be able to communicate without traversing any other AS. If this rule is not observed, the effect is often referred to as *AS puddling* and the network will *not* function correctly. +This command will output the configured resource requests and limits for the ManagerDeployment in JSON format. -A corollary of that rule is that any two administrative regions that share the same AS number, are in the same AS, even if that was not the desire of the designer. BGP has no way of identifying if an AS is local or foreign other than the AS number. Therefore re-use of an AS number for two *networks* that are not directly connected, but only connected through another *network* or AS number will not work without a lot of policy changes to the BGP routers. +```bash +{ -Another corollary of that rule is that a BGP router will not propagate a route to a peer if the route has an AS in its path that is the same AS as the peer. This prevents loops from forming in the network. The effect of this prevents two routers in the same AS from transiting another router (either in that AS or not). 
+ "name": "tigera-ui-apis", -**Next hop behavior** + "resources": { -Another consideration is based on the differences between iBGP and eBGP. BGP operates in two modes, if two routers are BGP peers, but share the same AS number, then they are considered to be in an *internal* BGP (or iBGP) peering relationship. If they are members of different AS's, then they are in an *external* or eBGP relationship. + "limits": { -BGP's original design model was that all BGP routers within a given AS would know how to get to one another (via static routes, IGP ([note 7](#note-7)) routing protocols, or the like), and that routers in different ASs would not know how to reach one another unless they were directly connected. + "cpu": "1", -Based on that design point, routers in an iBGP peering relationship assume that they do not transit traffic for other iBGP routers in a given AS (i.e. A can communicate with C, and therefore will not need to route through B), and therefore, do not change the *next hop* attribute in BGP ([note 8](#note-8)). + "memory": "1000Mi" -A router with an eBGP peering, on the other hand, assumes that its eBGP peer will not know how to reach the next hop route, and then will substitute its own address in the next hop field. This is often referred to as *next hop self*. + }, -In the Calico Enterprise [Ethernet fabric](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric) model, all of the compute servers (the routers in a Calico Enterprise network) are directly connected over one or more Ethernet network(s) and therefore are directly reachable. In this case, a router in the Calico Enterprise network does not need to set *next hop self* within the Calico Enterprise fabric. + "requests": { -The models we present in this article ensure that all routes that may traverse a non-Calico Enterprise router are eBGP routes, and therefore *next hop self* is automatically set correctly. 
If a deployment of Calico Enterprise in an IP interconnect fabric does not satisfy that constraint, then *next hop self* must be appropriately configured. + "cpu": "100m", -**Route reflection** + "memory": "100Mi" -As mentioned above, BGP expects that all of the iBGP routers in a network can see (and speak) directly to one another, this is referred to as a *BGP full mesh*. In small networks this is not a problem, but it does become interesting as the number of routers increases. For example, if you have 99 BGP routers in an AS and wish to add one more, you would have to configure the peering to that new router on each of the 99 existing routers. Not only is this a problem at configuration time, it means that each router is maintaining 100 protocol adjacencies, which can start being a drain on constrained resources in a router. While this might be *interesting* at 100 routers, it becomes an impossible task with 1000's or 10,000's of routers (the potential size of a Calico Enterprise network). + } -Conveniently, large scale/Internet scale networks solved this problem almost 20 years ago by deploying BGP route reflection as described in [RFC 1966](http://www.faqs.org/rfcs/rfc1966.html). This is a technique supported by almost all BGP routers today. In a large network, a number of route reflectors ([note 9](#note-9)) are evenly distributed and each iBGProuter is *peered* with one or more route reflectors (usually 2 or 3). Each route reflector can handle 10's or 100's of route reflector clients (in Calico Enterprise's case, the compute server), depending on the route reflector being used. Those route reflectors are, in turn, peered with each other. This means that there are an order of magnitude less route reflectors that need to be completely meshed, and each route reflector client is only configured to peer to 2 or 3 route reflectors. This is much easier to manage. + } -Other route reflector architectures are possible, but those are beyond the scope of this document. 
+}
-**Endpoints**
+{
-The final consideration is the number of endpoints in a Calico Enterprise network. In the [Ethernet fabric](https://docs.tigera.io/calico-enterprise/latest/reference/architecture/design/l2-interconnect-fabric) case the number of endpoints is not constrained by the interconnect fabric, as the interconnect fabric does not *see* the actual endpoints, it only *sees* the actual vRouters, or compute servers. This is not the case in an IP fabric, however. IP networks forward by using the destination IP address in the packet, which, in Calico Enterprise's case, is the destination endpoint. That means that the IP fabric nodes (ToR switches and/or spine switches, for example) must know the routes to each endpoint in the network. They learn this by participating as route reflector clients in the BGP mesh, just as the Calico Enterprise vRouter/compute server does.
+ "name": "tigera-voltron",
-However, unlike a compute server which has a relatively unconstrained amount of memory, a physical switch is either memory constrained, or quite expensive. This means that the physical switch has a limit on how many *routes* it can handle. The current industry standard for modern commodity switches is in the range of 128,000 routes. This means that, without other routing *tricks*, such as aggregation, a Calico Enterprise installation that uses an IP fabric will be limited to the routing table size of its constituent network hardware, with a reasonable upper limit today of 128,000 endpoints.
+ "resources": {
-### Footnotes[​](#footnotes)
+ "limits": {
-### Note 1[​](#note-1)
+ "cpu": "1",
-In Calico Enterprise's terminology, an endpoint is an IP address and interface. It could refer to a VM, a container, or even a process bound to an IP address running on a bare metal server.
+ "memory": "1000Mi"
-### Note 2[​](#note-2)
+ },
-This interconnect fabric provides the connectivity between the Calico Enterprise (v)Router (in almost all cases, the compute servers) nodes, as well as any other elements in the fabric (*e.g.* bare metal servers, border routers, and appliances).
+ "requests": {
-### Note 3[​](#note-3)
+ "cpu": "100m",
-If there is interest in a discussion of this approach, please let us know. The Calico Enterprise team could either arrange a discussion, or if there was enough interest, publish a follow-up tech note.
+ "memory": "100Mi"
-### Note 4[​](#note-4)
+ }
-However those tools are available if a given Calico Enterprise instance needs to utilize those policy constructs.
+ }
-### Note 5[​](#note-5)
+}
-The two byte AS space reserves approximately the last five thousand AS numbers for private use. There is no technical reason why other AS numbers could not be used. However the re-use of global scope AS numbers within a private infrastructure is strongly discouraged. The chance for routing system failure or incorrect routing is substantial, and not restricted to the entity that is doing the reuse.
+{
-### Note 6[​](#note-6)
+ "name": "tigera-manager",
-We first saw this design in a customer's lab, and thought it innovative enough to share (we asked them first, of course). Similar **AS Path Stripping** approaches are used in ISP networks, however.
+ "resources": {
-### Note 7[​](#note-7)
+ "limits": {
-An Interior Gateway Protocol is a local routing protocol that does not cross an AS boundary. The primary IGPs in use today are OSPF and IS-IS. While complex iBGP networks still use IGP routing protocols, a data center is normally a fairly simple network, even if it has many routers in it. Therefore, in the data center case, the use of an IGP can often be disposed of.
+ "cpu": "1",
-### Note 8[​](#note-8)
+ "memory": "1000Mi"
-A Next hop is an attribute of a route announced by a routing protocol. In simple terms a route is defined by a *target*, or the destination that is to be reached, and a *next hop*, which is the next router in the path to reach that target. There are many other characteristics in a route, but those are well beyond the scope of this post.
+ },
-### Note 9[​](#note-9)
+ "requests": {
-A route reflector may be a physical router, a software appliance, or simply a BGP daemon. It only processes routing messages, and does not pass actual data plane traffic. However, some route reflectors are co-resident on regular routers that do pass data plane traffic. Although they may sit on one platform, the functions are distinct.
+ "cpu": "100m",
-### Component resources
+ "memory": "100Mi"
-
+ }
-## [📄️Configuring the Calico Enterprise CNI plugins](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration)
+ }
-[Details for configuring the Calico Enterprise CNI plugins.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configuration)
+}
+```
-## [📄️Configure resource requests and limits](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configure-resources)
+## Monitor custom resource[​](#monitor-custom-resource)
-[Configure Resource requests and limits.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/configure-resources)
+The [Monitor](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#monitor) CR provides a way to configure resources for Prometheus and Alertmanager. The following sections provide example configurations for this CR.
-## [🗃Calico Enterprise Kubernetes controllers](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/)
+### Prometheus[​](#prometheus)
-[2 items](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/)
+To configure resources for [Prometheus](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#prometheus), set the "resources" field under "commonPrometheusFields" for the default container "prometheus". For all other injected containers, such as "authn-proxy", set resources using the "containers" list, as shown in the patch command below.
-## [🗃Calico Enterprise node (cnx-node)](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/)
+```bash
+kubectl patch monitor tigera-secure --type=merge --patch='{"spec": {"prometheus": {"spec":{ "commonPrometheusFields": {"resources": {"limits": {"cpu":"500m","memory":"500Mi"}, "requests": {"cpu":"50m", "memory":"50Mi"}}, "containers":[{"name":"authn-proxy","resources":{"limits": {"cpu":"250m","memory":"500Mi"},"requests": {"cpu":"25m","memory":"50Mi"}}}]}}}}}'
+```
-[2 items](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/)
+This command sets the "prometheus" container's CPU request to 50 milliCPU (mCPU) and its memory request to 50 Mebibytes (MiB), with a CPU limit of 500 mCPU and a memory limit of 500 MiB. The "authn-proxy" container is given a 25 mCPU/50 MiB request and a 250 mCPU/500 MiB limit.
-## [🗃Typha for scaling](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/typha/)
+#### Verification[​](#verification-22)
-[3 items](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/typha/)
+You can verify the configured resources using the following command:
-### Configuring the Calico Enterprise CNI plugins
+```bash
+kubectl get statefulset.apps/prometheus-calico-node-prometheus -n tigera-prometheus -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
-
+This command outputs the configured resource requests and limits for Prometheus in JSON format.
-
+> **SECONDARY:** The "config-reloader" container has default resource values set by the Prometheus resource.
-**Tab: Operator**
+```json
+{
-The Calico Enterprise CNI plugins do not need to be configured directly when installed by the operator. For a complete operator configuration reference, see [the installation API reference documentation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api).
+ "name": "prometheus", -**Tab: Manifest** + "resources": { -The Calico Enterprise CNI plugin is configured through the standard CNI [configuration mechanism](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration) + "limits": { -A minimal configuration file that uses Calico Enterprise for networking and IPAM looks like this + "cpu": "500m", -```json -{ + "memory": "500Mi" - "name": "any_name", + }, - "cniVersion": "0.1.0", + "requests": { - "type": "calico", + "cpu": "50m", - "ipam": { + "memory": "50Mi" - "type": "calico-ipam" + } } } -``` - -If the `node` container on a node registered with a `NODENAME` other than the node hostname, the CNI plugin on this node must be configured with the same `nodename`: -```json { - "name": "any_name", - - "nodename": "", - - "type": "calico", + "name": "config-reloader", - "ipam": { + "resources": { - "type": "calico-ipam" + "limits": { - } + "cpu": "10m", -} -``` + "memory": "50Mi" -Additional configuration can be added as detailed below. + }, -## Generic[​](#generic) + "requests": { -### Datastore type[​](#datastore-type) + "cpu": "10m", -The Calico Enterprise CNI plugin supports the following datastore: + "memory": "50Mi" -- `datastore_type` (kubernetes) + } -### Logging[​](#logging) + } -Logging is always to `stderr`. Logs are also written to `/var/log/calico/cni/cni.log` on each host by default. +} -Logging can be configured using the following options in the netconf. +{ -| Option name | Default | Description | -| -------------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------- | -| `log_level` | INFO | Logging level. Allowed levels are `ERROR`, `WARNING`, `INFO`, and `DEBUG`. | -| `log_file_path` | `/var/log/calico/cni/cni.log` | Location on each host to write CNI log files to. Logging to file can be disabled by removing this option. 
| -| `log_file_max_size` | 100 | Max file size in MB log files can reach before they are rotated. | -| `log_file_max_age` | 30 | Max age in days that old log files will be kept on the host before they are removed. | -| `log_file_max_count` | 10 | Max number of rotated log files allowed on the host before they are cleaned up. | + "name": "authn-proxy", -```json -{ + "resources": { - "name": "any_name", + "limits": { - "cniVersion": "0.1.0", + "cpu": "250m", - "type": "calico", + "memory": "500Mi" - "log_level": "DEBUG", + }, - "log_file_path": "/var/log/calico/cni/cni.log", + "requests": { - "ipam": { + "cpu": "25m", - "type": "calico-ipam" + "memory": "50Mi" + + } } } ``` -### IPAM[​](#ipam) +### Alertmanager[​](#alertmanager) -When using Calico Enterprise IPAM, the following flags determine what IP addresses should be assigned. NOTE: These flags are strings and not boolean values. +To configure resource specification for the [Alertmanager](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#alertmanager), you can set resources for the default container "prometheus" using the "resources" field under "commonPrometheusFields". For all other injected containers, like "authn-proxy", resource configuration can be set using the "containers" struct, as shown below in the patch command below. -- `assign_ipv4` (default: `"true"`) -- `assign_ipv6` (default: `"false"`) +```bash +kubectl patch monitor tigera-secure --type=merge --patch='{"spec": {"alertManager": {"spec": {"resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}}}}' +``` -A specific IP address can be chosen by using [`CNI_ARGS`](https://github.com/appc/cni/blob/master/SPEC.md#parameters) and setting `IP` to the desired value. +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
-By default, Calico Enterprise IPAM will assign IP addresses from all the available IP pools.
+#### Verification[​](#verification-23)
-Optionally, the list of possible IPv4 and IPv6 pools can also be specified via the following properties:
+You can verify the configured resources using the following command:
-- `ipv4_pools`: An array of CIDR strings or pool names. (e.g., `"ipv4_pools": ["10.0.0.0/24", "20.0.0.0/16", "default-ipv4-ippool"]`)
-- `ipv6_pools`: An array of CIDR strings or pool names. (e.g., `"ipv6_pools": ["2001:db8::1/120", "namedpool"]`)
+```bash
+kubectl get statefulset.apps/alertmanager-calico-node-alertmanager -n tigera-prometheus -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
-Example CNI config:
+This command outputs the configured resource requests and limits for Alertmanager in JSON format.
-```json
+> **SECONDARY:** The "config-reloader" container has default resource values set by the Alertmanager resource.
+
+```json
{
- "name": "any_name",
+ "name": "alertmanager",
- "cniVersion": "0.1.0",
+ "resources": {
- "type": "calico",
+ "limits": {
- "ipam": {
+ "cpu": "1",
- "type": "calico-ipam",
+ "memory": "1000Mi"
- "assign_ipv4": "true",
+ },
- "assign_ipv6": "true",
+ "requests": {
- "ipv4_pools": ["10.0.0.0/24", "20.0.0.0/16", "default-ipv4-ippool"],
+ "cpu": "100m",
- "ipv6_pools": ["2001:db8::1/120", "default-ipv6-ippool"]
+ "memory": "100Mi"
}
-}
-```
-
-> **SECONDARY:** `ipv6_pools` will be respected only when `assign_ipv6` is set to `"true"`.
-
-Any IP pools specified in the CNI config must have already been created. It is an error to specify IP pools in the config that do not exist.
-
-### Container settings[​](#container-settings)
-
-The following options allow configuration of settings within the container namespace.
+ }
-- allow\_ip\_forwarding (default is `false`)
+}
-```json
{
- "name": "any_name",
+ "name": "config-reloader",
- "cniVersion": "0.1.0",
+ "resources": {
- "type": "calico",
+ "limits": {
- "ipam": {
+ "cpu": "10m",
- "type": "calico-ipam"
+ "memory": "50Mi"
},
- "container_settings": {
+ "requests": {
- "allow_ip_forwarding": true
+ "cpu": "10m",
+
+ "memory": "50Mi"
+
+ }
}
}
```
-### Readiness Gates[​](#readiness-gates)
-
-The following option makes CNI plugin wait for specified endpoint(s) to be ready before configuring pod networking.
-
-- `readiness_gates`
-
-This is an optional property that takes an array of URLs. Each URL specified will be polled for readiness and pod networking will continue startup once all readiness\_gates are ready.
-
-Example CNI config:
-
-```json
-{
+## PacketCaptureAPI custom resource[​](#packetcaptureapi-custom-resource)
- "name": "any_name",
+The [PacketCaptureAPI](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#packetcaptureapi) CR provides a way to configure resources for PacketCapture. The following sections provide example configurations for this CR.
- "cniVersion": "0.1.0", +### PacketCaptureAPIDeployment[​](#packetcaptureapideployment) - "type": "calico", +To configure resource specification for the [PacketCaptureAPI](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#packetcaptureapi), patch the PacketCapture CR using the below command: - "ipam": { +```bash +kubectl patch packetcaptureapis tigera-secure --type=merge --patch='{"spec": {"packetCaptureAPIDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-packetcapture-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' +``` - "type": "calico-ipam" +This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). - }, +#### Verification[​](#verification-24) - "readiness_gates": ["http://localhost:9099/readiness", "http://localhost:8888/status"] +You can verify the configured resources using the following command: -} +```bash +kubectl get deployment.apps/tigera-packetcapture -n tigera-packetcapture -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` -## Kubernetes specific[​](#kubernetes-specific) - -When using the Calico Enterprise CNI plugin with Kubernetes, the plugin must be able to access the Kubernetes API server to find the labels assigned to the Kubernetes pods. The recommended way to configure access is through a `kubeconfig` file specified in the `kubernetes` section of the network config. e.g. +This command will output the configured resource requests and limits for the PacketCaptureDeployment in JSON format. 
-```json
+```json
{
- "name": "any_name",
+ "name": "tigera-packetcapture-server",
- "cniVersion": "0.1.0",
+ "resources": {
- "type": "calico",
+ "limits": {
- "kubernetes": {
+ "cpu": "1",
- "kubeconfig": "/path/to/kubeconfig"
+ "memory": "1000Mi"
},
- "ipam": {
+ "requests": {
- "type": "calico-ipam"
+ "cpu": "100m",
+
+ "memory": "100Mi"
+
+ }
}
}
```
-As a convenience, the API location can also be configured directly, e.g.
-```json
-{
-
- "name": "any_name",
-
- "cniVersion": "0.1.0",
+## PolicyRecommendation custom resource[​](#policyrecommendation-custom-resource)
- "type": "calico",
+The [PolicyRecommendation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#policyrecommendation) CR provides a way to configure resources for PolicyRecommendation. The following sections provide example configurations for this CR.
- "kubernetes": {
+### PolicyRecommendationDeployment[​](#policyrecommendationdeployment)
- "k8s_api_root": "http://127.0.0.1:8080"
+To configure resource specification for the [PolicyRecommendationDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#policyrecommendationdeployment), patch the PolicyRecommendation CR using the command below:
- },
+```bash
+kubectl patch policyrecommendation tigera-secure --type=merge --patch='{"spec": {"policyRecommendationDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"policy-recommendation-controller","resources":{"requests":{"cpu":"100m", "memory":"100Mi"},"limits":{"cpu":"1", "memory":"512Mi"}}}]}}}}}}'
+```
- "ipam": {
+This command sets the CPU request to 100 milliCPU (mCPU) and the memory request to 100 Mebibytes (MiB), while the CPU limit is set to 1 CPU and the memory limit to 512 MiB.
- "type": "calico-ipam" +#### Verification[​](#verification-25) - } +You can verify the configured resources using the following command: -} +```bash +kubectl get deployment.apps/tigera-policy-recommendation -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' ``` -### Enabling Kubernetes policy[​](#enabling-kubernetes-policy) - -If you wish to use the Kubernetes `NetworkPolicy` resource then you must set a policy type in the network config. There is a single supported policy type, `k8s`. When set, you must also run `tigera/kube-controllers` with the policy, profile, and workloadendpoint controllers enabled. +This command will output the configured resource requests and limits for the PolicyRecommendationDeployment in JSON format. -```json +```bash { - "name": "any_name", + "name": "policy-recommendation-controller", - "cniVersion": "0.1.0", + "resources": { - "type": "calico", + "limits": { - "policy": { + "cpu": "1", - "type": "k8s" + "memory": "512Mi" }, - "kubernetes": { - - "kubeconfig": "/path/to/kubeconfig" + "requests": { - }, + "cpu": "100m", - "ipam": { + "memory": "100Mi" - "type": "calico-ipam" + } } } ``` -When using `type: k8s`, the Calico Enterprise CNI plugin requires read-only Kubernetes API access to the `Pods` resource in all namespaces. +## Update via Helm[​](#update-via-helm) - +To update configurations during installation via the [Helm chart](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm#install-calico-enterprise), modify the values.yaml with the necessary resource values for the components prior to executing the Helm install. -### Enabling policy setup timeout[​](#enabling-policy-setup-timeout) +> **SECONDARY:** The provided example illustrates configuring the apiserver component. Follow a similar approach for other components to update resource requests and limits during installation using the Helm chart. 
-The Calico Enterprise CNI plugin can be configured to prevent new pods from starting their containers until one of the following conditions occurs:
+### APIServer custom resource[​](#apiserver-custom-resource-1)
-- The pod's policy has finished being programmed.
-- A specified amount of time has elapsed.
+The [APIServer](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserver) CR provides a way to configure the APIServerDeployment. The following sections provide an example values.yaml for the apiserver component.
-By enabling this feature, you can avoid errors that can occur when a pod tries to start before the pod's policy is programmed by its host.
+#### APIServerDeployment[​](#apiserverdeployment-1)
-
+To configure resource specification for the [APIServerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserverdeployment), update values.yaml with the appropriate resource values.
-**Tab: Operator**
+```yaml
+apiServer:
-The policy setup timeout can be configured by setting the `linuxPolicySetupTimeoutSeconds` field in the [calicoNetwork spec](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#caliconetworkspec) of the default `operator.tigera.io/v1/installation` resource.
+ apiServerDeployment:
-The following example configures the CNI to delay a pod from starting its containers for up to 10 seconds, or until the pod's data plane has been programmed:
+ spec:
-```yaml
-kind: Installation
+ template:
-apiVersion: operator.tigera.io/v1
+ spec:
-metadata:
+ containers:
- name: default
+ - name: calico-apiserver
-spec:
+ resources:
- calicoNetwork:
+ limits:
- linuxPolicySetupTimeoutSeconds: 10
-```
+ cpu: 1
-**Tab: Manifest**
+ memory: 1000Mi
-The policy setup timeout can be configured by setting the `policy_setup_timeout_seconds` option in the CNI config.
+ requests:
-Example CNI config:
+ cpu: 100m
-```json
-{
+ memory: 100Mi
+```
- "name": "any_name",
+You can verify the configured resources using the following command:
- "cniVersion": "0.1.0",
+```bash
+kubectl get deployment.apps/calico-apiserver -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
+```
- "type": "calico",
+### kube-controllers
- "policy_setup_timeout_seconds": 10,
+
- "ipam": {
+## [📄️Configuring the Calico Enterprise Kubernetes controllers](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/configuration)
- "type": "calico-ipam"
+[Calico Enterprise Kubernetes controllers monitor the Kubernetes API and perform actions based on cluster state.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/configuration)
- }
+## [📄️Monitoring kube-controllers with Prometheus](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/prometheus)
-}
-```
+[Review metrics for the kube-controllers component if you are using Prometheus.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/prometheus)
-The Calico Enterprise CNI plugin reads Felix's `endpoint-status` directory to determine when the data plane has been programmed for a pod. If left unset, the Calico Enterprise CNI plugin will look for the directory at `/var/run/calico/endpoint-status`. The path `/var/run/calico` is commonly mounted to the Calico Enterprise DaemonSet, meaning it can be written to by the Felix container, and read by the (host-namespace) Calico Enterprise CNI plugin. To enable the `endpoint-status` directory, and adjust which directory of the Felix container it is written to, the `endpointStatusPathPrefix` option must be configured for [Felix](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/configuration).
+### Configuring the Calico Enterprise Kubernetes controllers -To adjust where the Calico Enterprise CNI plugin looks for the `endpoint-status` directory in the host filesystem, you must set the `endpoint_status_dir` option. + -Example CNI config: +The Calico Enterprise Kubernetes controllers are deployed in a Kubernetes cluster. The different controllers monitor the Kubernetes API and perform actions based on cluster state. -```json -{ + - "name": "any_name", +**Tab: Operator** - "cniVersion": "0.1.0", +If you have installed Calico using the operator, see the [KubeControllersConfiguration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/kubecontrollersconfig) resource instead. - "type": "calico", +**Tab: Manifest** - "policy_setup_timeout_seconds": 10, +The controllers are primarily configured through environment variables. When running the controllers as a Kubernetes pod, this is accomplished through the pod manifest `env` section. - "endpoint_status_dir": "/path/to/endpoint-status", +## The tigera/kube-controllers container[​](#the-tigerakube-controllers-container) - "ipam": { +The `tigera/kube-controllers` container includes the following controllers: - "type": "calico-ipam" +1. node controller: watches for the removal of Kubernetes nodes and removes corresponding data from Calico Enterprise, and optionally watches for node updates to create and sync host endpoints for each node. +2. federation controller: watches Kubernetes services and endpoints locally and across all remote clusters, and programs Kubernetes endpoints for any locally configured service that specifies a service federation selector annotation. - } +### Configuring datastore access[​](#configuring-datastore-access) -} -``` +The datastore type can be configured via the `DATASTORE_TYPE` environment variable. Only supported value is `kubernetes`. 
-
+#### kubernetes[​](#kubernetes)
-## IPAM[​](#ipam-1)
+When running the controllers as a Kubernetes pod, Kubernetes API access is [configured automatically](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod) and no additional configuration is required. However, the controllers can also be configured to use an explicit [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file override to configure API access if needed.
-### Using host-local IPAM[​](#using-host-local-ipam)
+| Environment | Description | Schema |
+| ------------ | ------------------------------------------------------------------ | ------ |
+| `KUBECONFIG` | Path to a Kubernetes kubeconfig file mounted within the container. | path |
-Calico can be configured to use [host-local IPAM](https://www.cni.dev/plugins/current/ipam/host-local/) instead of the default `calico-ipam`. Host local IPAM uses a pre-determined CIDR per-host, and stores allocations locally on each node. This is in contrast to Calico IPAM, which dynamically allocates blocks of addresses and single addresses alike in response to cluster needs.
+### Other configuration[​](#other-configuration)
-Host local IPAM is generally only used on clusters where integration with the Kubernetes [route controller](https://kubernetes.io/docs/concepts/architecture/cloud-controller/#route-controller) is necessary. Note that some Calico features - such as the ability to request a specific address or pool for a pod - require Calico IPAM to function, and will not work with host-local IPAM enabled.
+> **SECONDARY:** Whenever possible, prefer configuring the kube-controllers component using the [KubeControllersConfiguration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/kubecontrollersconfig) API resource; some configuration options may not be available through environment variables.
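As a sketch of the kubeconfig override described above, a manifest-based pod spec could mount a kubeconfig file and point the controllers at it via `KUBECONFIG`. The volume name, mount path, and Secret name below are illustrative assumptions, not values taken from the Calico Enterprise manifests:

```yaml
# Illustrative fragment only: tigera/kube-controllers container with an
# explicit kubeconfig override. Names and paths here are hypothetical.
containers:
  - name: calico-kube-controllers
    image: tigera/kube-controllers
    env:
      - name: KUBECONFIG
        value: /etc/kubeconfig/kubeconfig   # file mounted below
    volumeMounts:
      - name: kubeconfig
        mountPath: /etc/kubeconfig
        readOnly: true
volumes:
  - name: kubeconfig
    secret:
      secretName: kube-controllers-kubeconfig   # hypothetical Secret
```

In most clusters the in-pod service account credentials are sufficient, so this override is only needed when the controllers must talk to a different API server than the one that scheduled them.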
- +The following environment variables can be used to configure the Calico Enterprise Kubernetes controllers. -**Tab: Operator** +| Environment | Description | Schema | Default | +| --------------------- | --------------------------------------------------------------------------- | --------------------------------------------------------- | ----------------------------------------------------- | +| `DATASTORE_TYPE` | Which datastore type to use | etcdv3, kubernetes | kubernetes | +| `ENABLED_CONTROLLERS` | Which controllers to run | namespace, node, policy, serviceaccount, workloadendpoint | policy,namespace,serviceaccount,workloadendpoint,node | +| `LOG_LEVEL` | Minimum log level to be displayed. | debug, info, warning, error | info | +| `KUBECONFIG` | Path to a kubeconfig file for Kubernetes API access | path | | +| `SYNC_NODE_LABELS` | When enabled, Kubernetes node labels will be copied to Calico node objects. | boolean | true | +| `AUTO_HOST_ENDPOINTS` | When set to enabled, automatically create a host endpoint for each node. | enabled, disabled | disabled | -The `host-local` IPAM plugin can be configured by setting the `Spec.CNI.IPAM.Plugin` field to `HostLocal` on the [operator.tigera.io/Installation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#installation) API. +## About each controller[​](#about-each-controller) -Calico will use the `host-local` IPAM plugin to allocate IPv4 addresses from the node's IPv4 pod CIDR if there is an IPv4 pool configured in `Spec.IPPools`, and an IPv6 address from the node's IPv6 pod CIDR if there is an IPv6 pool configured in `Spec.IPPools`. +### Node controller[​](#node-controller) -The following example configures Calico to assign dual-stack IPs to pods using the host-local IPAM plugin. +The node controller has several functions. -```yaml -kind: Installation +- Garbage collects IP addresses. +- Automatically provisions host endpoints for Kubernetes nodes. 
-apiVersion: operator.tigera.io/v1
+### Federation controller[​](#federation-controller)
-metadata:
+The federation controller syncs Kubernetes federated endpoint changes to the Calico Enterprise datastore. The controller must have read access to the Kubernetes API to monitor `Service` and `Endpoints` events, and must also have write access to update `Endpoints`.
- name: default
+The federation controller is disabled by default if `ENABLED_CONTROLLERS` is not explicitly specified.
-spec:
+This controller is valid for all Calico Enterprise datastore types. For more details, refer to the [Configuring federated services](https://docs.tigera.io/calico-enterprise/latest/multicluster/federation/services-controller) usage guide.
- calicoNetwork:
+
- ipPools:
+### Monitoring kube-controllers with Prometheus
- - cidr: 192.168.0.0/16
+kube-controllers can be configured to report a number of metrics through Prometheus. This reporting is enabled by default on port 9094. See the [configuration reference](https://docs.tigera.io/calico-enterprise/latest/reference/resources/kubecontrollersconfig) for how to change metrics reporting configuration (or disable it completely).
- - cidr: 2001:db8::/64
+## Metric reference[​](#metric-reference)
- cni:
+#### kube-controllers specific[​](#kube-controllers-specific)
- type: Calico
+kube-controllers exports a number of Prometheus metrics. The current set is as follows. Since some metrics may be tied to particular implementation choices inside kube-controllers, we can't make any hard guarantees that metrics will persist across releases. However, we aim not to make any spurious changes to existing metrics.
- ipam:
+| Metric Name | Labels | Description |
+| ------------------------------------ | --------------------- | ----------- |
+| `ipam_allocations_in_use` | ippool, node | Number of Calico IP allocations currently in use by a workload or interface. |
+| `ipam_allocations_borrowed` | ippool, node | Number of Calico IP allocations currently in use where the allocation was borrowed from a block affine to another node. |
+| `ipam_allocations_gc_candidates` | ippool, node | Number of Calico IP allocations currently marked by the GC as potential leaks. This metric returns to zero under normal GC operation. |
+| `ipam_allocations_gc_reclamations` | ippool, node | Count of Calico IP allocations that have been reclaimed by the GC. Increase of this counter corresponds with a decrease of the candidates gauge under normal operation. |
+| `ipam_blocks` | ippool, node | Number of IPAM blocks. |
+| `ipam_ippool_size` | ippool | Number of IP addresses in the IP Pool CIDR. |
+| `ipam_blocks_per_node` | node | Number of IPAM blocks, indexed by the node to which they have affinity. Prefer `ipam_blocks` for new integrations. |
+| `ipam_allocations_per_node` | node | Number of Calico IP allocations, indexed by node on which the allocation was made. Prefer `ipam_allocations_in_use` for new integrations. |
+| `ipam_allocations_borrowed_per_node` | node | Number of Calico IP allocations borrowed from a non-affine block, indexed by node on which the allocation was made. Prefer `ipam_allocations_borrowed` for new integrations. |
+| `remote_cluster_connection_status` | remote\_cluster\_name | Status of the remote cluster connection in federation. Represented as numeric values 0 (NotConnecting), 1 (Connecting), 2 (InSync), 3 (ReSyncInProgress), 4 (ConfigChangeRestartRequired), 5 (ConfigInComplete). |
- type: HostLocal
+Labels can be interpreted as follows:
+
+| Label Name | Description |
+| --------------------- | ----------- |
+| `node` | For allocation metrics, the node on which the allocation was made. For block metrics, the node for which the block has affinity. If the block has no affinity, value will be `no_affinity`. |
+| `ippool` | The IP Pool that the IPAM block occupies. If there is no IP Pool which matches the block, value will be `no_ippool`. |
+| `remote_cluster_name` | Name of the remote cluster in federation. |
+
+Prometheus metrics are self-documenting; with metrics turned on, `curl` can be used to list the metrics along with their help text and type information.
+
+```bash
+curl -s http://localhost:9094/metrics | head
+```
-**Tab: Manifest**
+#### CPU / memory metrics[​](#cpu--memory-metrics)
-When using the CNI `host-local` IPAM plugin, two special values - `usePodCidr` and `usePodCidrIPv6` - are allowed for the subnet field (either at the top-level, or in a "range"). This tells the plugin to determine the subnet to use from the Kubernetes API based on the Node.podCIDR field. Calico Enterprise does not use the `gateway` field of a range so that field is not required and it will be ignored if present.
+kube-controllers also exports the default set of metrics that Prometheus makes available. Currently, those include:
-> **SECONDARY:** `usePodCidr` and `usePodCidrIPv6` can only be used as the value of the `subnet` field, it cannot be used in `rangeStart` or `rangeEnd` so those values are not useful if `subnet` is set to `usePodCidr`.
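Given the numeric encoding of `remote_cluster_connection_status` above (2 = InSync), an alerting rule for federation connections that are not fully synced could be sketched as follows. This is a hypothetical PrometheusRule fragment; only the metric name and its numeric encoding come from the table above, while the group name, alert name, and durations are assumptions:

```yaml
# Hypothetical alerting-rule sketch for federation connection health.
groups:
  - name: federation-remote-clusters   # hypothetical group name
    rules:
      - alert: RemoteClusterNotInSync
        expr: remote_cluster_connection_status != 2   # 2 = InSync
        for: 15m
        annotations:
          summary: "Remote cluster {{ $labels.remote_cluster_name }} is not InSync"
```

Scoping the expression with `!= 2` rather than `== 0` also catches clusters stuck in Connecting or restart-required states, not just disconnected ones.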
+| Name | Description | +| -------------------------------------------- | ------------------------------------------------------------------ | +| `go_gc_duration_seconds` | A summary of the GC invocation durations. | +| `go_goroutines` | Number of goroutines that currently exist. | +| `go_memstats_alloc_bytes` | Number of bytes allocated and still in use. | +| `go_memstats_alloc_bytes_total` | Total number of bytes allocated, even if freed. | +| `go_memstats_buck_hash_sys_bytes` | Number of bytes used by the profiling bucket hash table. | +| `go_memstats_frees_total` | Total number of frees. | +| `go_memstats_gc_sys_bytes` | Number of bytes used for garbage collection system metadata. | +| `go_memstats_heap_alloc_bytes` | Number of heap bytes allocated and still in use. | +| `go_memstats_heap_idle_bytes` | Number of heap bytes waiting to be used. | +| `go_memstats_heap_inuse_bytes` | Number of heap bytes that are in use. | +| `go_memstats_heap_objects` | Number of allocated objects. | +| `go_memstats_heap_released_bytes_total` | Total number of heap bytes released to OS. | +| `go_memstats_heap_sys_bytes` | Number of heap bytes obtained from system. | +| `go_memstats_last_gc_time_seconds` | Number of seconds since 1970 of last garbage collection. | +| `go_memstats_lookups_total` | Total number of pointer lookups. | +| `go_memstats_mallocs_total` | Total number of mallocs. | +| `go_memstats_mcache_inuse_bytes` | Number of bytes in use by mcache structures. | +| `go_memstats_mcache_sys_bytes` | Number of bytes used for mcache structures obtained from system. | +| `go_memstats_mspan_inuse_bytes` | Number of bytes in use by mspan structures. | +| `go_memstats_mspan_sys_bytes` | Number of bytes used for mspan structures obtained from system. | +| `go_memstats_next_gc_bytes` | Number of heap bytes when next garbage collection will take place. | +| `go_memstats_other_sys_bytes` | Number of bytes used for other system allocations. 
| +| `go_memstats_stack_inuse_bytes` | Number of bytes in use by the stack allocator. | +| `go_memstats_stack_sys_bytes` | Number of bytes obtained from system for stack allocator. | +| `go_memstats_sys_bytes` | Number of bytes obtained by system. Sum of all system allocations. | +| `process_cpu_seconds_total` | Total user and system CPU time spent in seconds. | +| `process_max_fds` | Maximum number of open file descriptors. | +| `process_open_fds` | Number of open file descriptors. | +| `process_resident_memory_bytes` | Resident memory size in bytes. | +| `process_start_time_seconds` | Start time of the process since Unix epoch in seconds. | +| `process_virtual_memory_bytes` | Virtual memory size in bytes. | +| `promhttp_metric_handler_requests_in_flight` | Current number of scrapes being served. | +| `promhttp_metric_handler_requests_total` | Total number of scrapes by HTTP status code. | -Calico Enterprise supports the host-local IPAM plugin's `routes` field as follows: +### Calico Enterprise node (node) -- If there is no `routes` field, Calico Enterprise will install a default `0.0.0.0/0`, and/or `::/0` route into the pod (depending on whether the pod has an IPv4 and/or IPv6 address). + -- If there is a `routes` field then Calico Enterprise will program *only* the routes in the routes field into the pod. Since Calico Enterprise implements a point-to-point link into the pod, the `gw` field is not required and it will be ignored if present. All routes that Calico Enterprise installs will have Calico Enterprise's link-local IP as the next hop. 
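The `routes` behavior described above can be sketched with a minimal `host-local` CNI configuration. This is an illustrative fragment only — the `name` and destination CIDRs are placeholders, not values from a real install:

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "type": "calico",
  "ipam": {
    "type": "host-local",
    "subnet": "usePodCidr"
  },
  "routes": [
    { "dst": "10.0.0.0/8" },
    { "dst": "172.16.0.0/12" }
  ]
}
```

With this config, only the two listed destinations are routed into the pod (each with Calico's link-local IP as the next hop); omitting the `routes` field entirely would instead install the default `0.0.0.0/0` route.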
+## [📄️Configuring node](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration) -Calico Enterprise CNI plugin configuration: +[Customize node using environment variables.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration) -- `node_name` - - The node name to use when looking up the CIDR value (defaults to current hostname) +## [🗃Felix](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/) -```json -{ +[2 items](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/) - "name": "any_name", +### Configuring node - "cniVersion": "0.1.0", + - "type": "calico", +The `node` container is deployed to every node (on Kubernetes, by a DaemonSet), and runs three internal daemons: - "kubernetes": { +- Felix, the Calico daemon that runs on every node and provides endpoints. +- BIRD, the BGP daemon that distributes routing information to other nodes. +- confd, a daemon that watches the Calico datastore for config changes and updates BIRD’s config files. - "kubeconfig": "/path/to/kubeconfig", +For manifest-based installations, `node` is primarily configured through environment variables, typically set in the deployment manifest. Individual nodes may also be updated through the Node custom resource. `node` can also be configured through the Calico Operator. - "node_name": "node-name-in-k8s" +The rest of this page lists the available configuration options, and is followed by specific considerations for various settings. - }, + - "ipam": { +**Tab: Operator** - "type": "host-local", +`node` does not need to be configured directly when installed by the operator. For a complete operator configuration reference, see [the installation API reference documentation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). 
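As an illustration of the operator path, the pool parameters that the environment variables below control (CIDR, encapsulation, NAT, node selector) are expressed on the operator's Installation resource instead; a sketch, where the CIDR and encapsulation values are examples only:

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      # Illustrative values; adjust to your environment.
      - cidr: 192.168.0.0/16
        encapsulation: IPIP
        natOutgoing: Enabled
        nodeSelector: all()
```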
- "ranges": [[{ "subnet": "usePodCidr" }], [{ "subnet": "usePodCidrIPv6" }]],

+**Tab: Manifest**

- "routes": [{ "dst": "0.0.0.0/0" }, { "dst": "2001:db8::/96" }]

+## Environment variables[​](#environment-variables)

- }

+### Configuring the default IP pool(s)[​](#configuring-the-default-ip-pools)

-}
-```

+Calico uses IP pools to configure how addresses are allocated to pods, and how networking works for certain sets of addresses. You can see the full schema for IP pools in the [IPPool resource reference](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool).

-When making use of the `usePodCidr` or `usePodCidrIPv6` options, the Calico Enterprise CNI plugin requires read-only Kubernetes API access to the `Nodes` resource.

+`node` can be configured to create a default IP pool for you, but only if none already exist in the cluster. The following options control the parameters on the created pool.

-#### Configuring node and typha[​](#configuring-node-and-typha)

+| Environment | Description | Schema |
+| -------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+| CALICO\_IPV4POOL\_CIDR | The IPv4 Pool to create if none exists at start up. It is invalid to define this variable and NO\_DEFAULT\_POOLS. \[Default: First CIDR of (192.168.0.0/16, 172.16.0.0/16, .., 172.31.0.0/16) not already in use locally] | IPv4 CIDR |
+| CALICO\_IPV4POOL\_BLOCK\_SIZE | Block size to use for the IPv4 Pool created at startup. Block size for IPv4 should be in the range 20-32 (inclusive) \[Default: `26`] | int |
+| CALICO\_IPV4POOL\_IPIP | IPIP Mode to use for the IPv4 Pool created at start up. If set to a value other than `Never`, `CALICO_IPV4POOL_VXLAN` should not be set. 
\[Default: `Always`] | Always, CrossSubnet, Never ("Off" is also accepted as a synonym for "Never") | +| CALICO\_IPV4POOL\_VXLAN | VXLAN Mode to use for the IPv4 Pool created at start up. If set to a value other than `Never`, `CALICO_IPV4POOL_IPIP` should not be set. \[Default: `Never`] | Always, CrossSubnet, Never | +| CALICO\_IPV4POOL\_NAT\_OUTGOING | Controls NAT Outgoing for the IPv4 Pool created at start up. \[Default: `true`] | boolean | +| CALICO\_IPV4POOL\_NODE\_SELECTOR | Controls the NodeSelector for the IPv4 Pool created at start up. \[Default: `all()`] | [selector](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool#node-selector) | +| CALICO\_IPV6POOL\_CIDR | The IPv6 Pool to create if none exists at start up. It is invalid to define this variable and NO\_DEFAULT\_POOLS. \[Default: ``] | IPv6 CIDR | +| CALICO\_IPV6POOL\_BLOCK\_SIZE | Block size to use for the IPv6 POOL created at startup. Block size for IPv6 should be in the range 116-128 (inclusive) \[Default: `122`] | int | +| CALICO\_IPV6POOL\_VXLAN | VXLAN Mode to use for the IPv6 Pool created at start up. \[Default: `Never`] | Always, CrossSubnet, Never | +| CALICO\_IPV6POOL\_NAT\_OUTGOING | Controls NAT Outgoing for the IPv6 Pool created at start up. \[Default: `false`] | boolean | +| CALICO\_IPV6POOL\_NODE\_SELECTOR | Controls the NodeSelector for the IPv6 Pool created at start up. \[Default: `all()`] | [selector](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool#node-selector) | +| CALICO\_IPV4POOL\_DISABLE\_BGP\_EXPORT | Disable exporting routes over BGP for the IPv4 Pool created at start up. \[Default: `false`] | boolean | +| CALICO\_IPV6POOL\_DISABLE\_BGP\_EXPORT | Disable exporting routes over BGP for the IPv6 Pool created at start up. \[Default: `false`] | boolean | +| NO\_DEFAULT\_POOLS | Prevents Calico Enterprise from creating a default pool if one does not exist. 
\[Default: `false`] | boolean | -When using `host-local` IPAM with the Kubernetes API datastore, you must configure both node and the Typha deployment to use the `Node.podCIDR` field by setting the environment variable `USE_POD_CIDR=true` in each. +### Configuring BGP Networking[​](#configuring-bgp-networking) - +BGP configuration for Calico nodes is normally configured through the [Node](https://docs.tigera.io/calico-enterprise/latest/reference/resources/node), [BGPConfiguration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/bgpconfig), and [BGPPeer](https://docs.tigera.io/calico-enterprise/latest/reference/resources/bgppeer) resources. `node` also exposes some options to allow setting certain fields on these objects, as described below. -### Using Kubernetes annotations[​](#using-kubernetes-annotations) +| Environment | Description | Schema | +| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- | +| NODENAME | A unique identifier for this host. See [node name determination](#node-name-determination) for more details. | lowercase string | +| IP | The IPv4 address to assign this host or detection behavior at startup. Refer to [IP setting](#ip-setting) for the details of the behavior possible with this field. | IPv4 | +| IP6 | The IPv6 address to assign this host or detection behavior at startup. Refer to [IP setting](#ip-setting) for the details of the behavior possible with this field. 
| IPv6 | +| IP\_AUTODETECTION\_METHOD | The method to use to autodetect the IPv4 address for this host. This is only used when the IPv4 address is being autodetected. See [IP Autodetection methods](#ip-autodetection-methods) for details of the valid methods. \[Default: `first-found`] | string | +| IP6\_AUTODETECTION\_METHOD | The method to use to autodetect the IPv6 address for this host. This is only used when the IPv6 address is being autodetected. See [IP Autodetection methods](#ip-autodetection-methods) for details of the valid methods. \[Default: `first-found`] | string | +| AS | The AS number for this node. When specified, the value is saved in the node resource configuration for this host, overriding any previously configured value. When omitted, if an AS number has been previously configured in the node resource, that AS number is used for the peering. When omitted, if an AS number has not yet been configured in the node resource, the node will use the global value (see [example modifying Global BGP settings](https://docs.tigera.io/calico-enterprise/latest/networking/configuring/bgp) for details.) | int | +| CALICO\_ROUTER\_ID | Sets the `router id` to use for BGP if no IPv4 address is set on the node. For an IPv6-only system, this may be set to `hash`. It then uses the hash of the nodename to create a 4 byte router id. See note below. \[Default: \`\`] | string | +| CALICO\_K8S\_NODE\_REF | The name of the corresponding node object in the Kubernetes API. When set, used for correlating this node with events from the Kubernetes API. | string | + +### Configuring Datastore Access[​](#configuring-datastore-access) + +| Environment | Description | Schema | +| --------------- | ------------------------------------------- | ------------------ | +| DATASTORE\_TYPE | Type of datastore. 
\[Default: `kubernetes`] | kubernetes, etcdv3 |
+
+#### Configuring Kubernetes Datastore Access[​](#configuring-kubernetes-datastore-access)
+
+| Environment | Description | Schema |
+| ------------------ | ------------------------------------------------------------------------------ | ------ |
+| KUBECONFIG | When using the Kubernetes datastore, the location of a kubeconfig file to use. | string |
+| K8S\_API\_ENDPOINT | Location of the Kubernetes API. Not required if using kubeconfig. | string |
+| K8S\_CERT\_FILE | Location of a client certificate for accessing the Kubernetes API. | string |
+| K8S\_KEY\_FILE | Location of a client key for accessing the Kubernetes API. | string |
+| K8S\_CA\_FILE | Location of a CA for accessing the Kubernetes API. | string |
+
+> **SECONDARY:** When Calico Enterprise is configured to use the Kubernetes API as the datastore, the environment variables used for BGP configuration are ignored; this includes selection of the node AS number (AS) and all of the IP selection options (IP, IP6, IP\_AUTODETECTION\_METHOD, IP6\_AUTODETECTION\_METHOD).

-#### Specifying IP pools on a per-namespace or per-pod basis[​](#specifying-ip-pools-on-a-per-namespace-or-per-pod-basis)

+### Configuring Logging[​](#configuring-logging)

-In addition to specifying IP pools in the CNI config as discussed above, Calico Enterprise IPAM supports specifying IP pools per-namespace or per-pod using the following [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/).

+| Environment | Description | Schema |
+| ------------------------------ | -------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
+| CALICO\_DISABLE\_FILE\_LOGGING | Disables logging to file. \[Default: "false"] | string |
+| CALICO\_STARTUP\_LOGLEVEL | The log severity above which startup `node` logs are sent to stdout. 
\[Default: `ERROR`] | DEBUG, INFO, WARNING, ERROR, CRITICAL, or NONE (case-insensitive) | -- `cni.projectcalico.org/ipv4pools`: A list of configured IPv4 Pools from which to choose an address for the pod. +### Configuring CNI Plugin[​](#configuring-cni-plugin) - Example: +`node` has a few options that are configurable based on the CNI plugin and CNI plugin configuration used on the cluster. - ```yaml - annotations: +| Environment | Description | Schema | +| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------- | +| USE\_POD\_CIDR | Use the Kubernetes `Node.Spec.PodCIDR` field when using host-local IPAM. Requires Kubernetes API datastore. This field is required when using the Kubernetes API datastore with host-local IPAM. \[Default: false] | boolean | +| CALICO\_MANAGE\_CNI | Tells Calico to update the kubeconfig file at /host/etc/cni/net.d/calico-kubeconfig on credentials change. \[Default: true] | boolean | - 'cni.projectcalico.org/ipv4pools': '["default-ipv4-ippool"]' - ``` +### Other Environment Variables[​](#other-environment-variables) -- `cni.projectcalico.org/ipv6pools`: A list of configured IPv6 Pools from which to choose an address for the pod. +| Environment | Description | Schema | +| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | +| DISABLE\_NODE\_IP\_CHECK | Skips checks for duplicate Node IPs. This can reduce the load on the cluster when a large number of Nodes are restarting. 
\[Default: `false`] | boolean | +| WAIT\_FOR\_DATASTORE | Wait for connection to datastore before starting. If a successful connection is not made, node will shutdown. \[Default: `false`] | boolean | +| CALICO\_NETWORKING\_BACKEND | The networking backend to use. In `bird` mode, Calico will provide BGP networking using the BIRD BGP daemon; VXLAN networking can also be used. In `vxlan` mode, only VXLAN networking is provided; BIRD and BGP are disabled. If set to `none` (also known as policy-only mode), both BIRD and VXLAN are disabled. \[Default: `bird`] | bird, vxlan, none | +| CLUSTER\_TYPE | Contains comma delimited list of indicators about this cluster. e.g. k8s, mesos, kubeadm, canal, bgp | string | - Example: +## Appendix[​](#appendix) - ```yaml - annotations: +### Node name determination[​](#node-name-determination) - 'cni.projectcalico.org/ipv6pools': '["2001:db8::1/120"]' - ``` +The `node` must know the name of the node on which it is running. The node name is used to retrieve the [Node resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/node) configured for this node if it exists, or to create a new node resource representing the node if it does not. It is also used to associate the node with per-node [BGP configuration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/bgpconfig), [felix configuration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/felixconfig), and endpoints. -If provided, these IP pools will override any IP pools specified in the CNI config. +When launched, the `node` container sets the node name according to the following order of precedence: -> **SECONDARY:** This requires the IP pools to exist before `ipv4pools` or `ipv6pools` annotations are used. Requesting a subset of an IP pool is not supported. IP pools requested in the annotations must exactly match a configured [IPPool](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool) resource. +1. 
The value specified in the `NODENAME` environment variable, if set. +2. The value specified in `/var/lib/calico/nodename`, if it exists. +3. The value specified in the `HOSTNAME` environment variable, if set. +4. The hostname as returned by the operating system, converted to lowercase. -> **SECONDARY:** The Calico Enterprise CNI plugin supports specifying an annotation per namespace. If both the namespace and the pod have this annotation, the pod information will be used. Otherwise, if only the namespace has the annotation the annotation of the namespace will be used for each pod in it. +Once the node has determined its name, the value will be cached in `/var/lib/calico/nodename` for future use. -#### Requesting a specific IP address[​](#requesting-a-specific-ip-address) +For example, if given the following conditions: -You can also request a specific IP address through [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) with Calico Enterprise IPAM. There are two annotations to request a specific IP address: +- `NODENAME=""` +- `/var/lib/calico/nodename` does not exist +- `HOSTNAME="host-A"` +- The operating system returns "host-A.internal.myorg.com" for the hostname -- `cni.projectcalico.org/ipAddrs`: A list of IPv4 and/or IPv6 addresses to assign to the Pod. The requested IP addresses will be assigned from Calico Enterprise IPAM and must exist within a configured IP pool. +node will use "host-a" for its name and will write the value in `/var/lib/calico/nodename`. If node is then restarted, it will use the cached value of "host-a" read from the file on disk. - Example: +### IP setting[​](#ip-setting) - ```yaml - annotations: +The IP (for IPv4) and IP6 (for IPv6) environment variables are used to set, force autodetection, or disable auto detection of the address for the appropriate IP version for the node. 
When the environment variable is set, the address is saved in the [node resource configuration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/node) for this host, overriding any previously configured value. - 'cni.projectcalico.org/ipAddrs': '["192.168.0.1"]' - ``` +calico/node will attempt to detect subnet information from the host, and augment the provided address if possible. -- `cni.projectcalico.org/ipAddrsNoIpam`: A list of IPv4 and/or IPv6 addresses to assign to the Pod, bypassing IPAM. Any IP conflicts and routing have to be taken care of manually or by some other system. Calico Enterprise will only distribute routes to a Pod if its IP address falls within a Calico Enterprise IP pool using BGP mode. Calico will not distribute ipAddrsNoIpam routes when operating in VXLAN mode. If you assign an IP address that is not in a Calico Enterprise IP pool or if its IP address falls within a Calico Enterprise IP pool that uses VXLAN encapsulation, you must ensure that routing to that IP address is taken care of through another mechanism. +#### IP setting special case values[​](#ip-setting-special-case-values) - Example: +There are several special case values that can be set in the IP(6) environment variables, they are: - ```yaml - annotations: +- Not set or empty string: Any previously set address on the node resource will be used. If no previous address is set on the node resource the two versions behave differently: - 'cni.projectcalico.org/ipAddrsNoIpam': '["10.0.0.1"]' - ``` + - The ipAddrsNoIpam feature is disabled by default. It can be enabled in the feature\_control section of the CNI network config: + - IP will do autodetection of the IPv4 address and set it on the node resource. + - IP6 will not do autodetection. - ```json - { +- `autodetect`: Autodetection will always be performed for the IP address and the detected address will overwrite any value configured in the node resource. 
- "name": "any_name", +- `none`: Autodetection will not be performed (this is useful to disable IPv4). - "cniVersion": "0.1.0", +### IP autodetection methods[​](#ip-autodetection-methods) - "type": "calico", +When Calico Enterprise is used for routing, each node must be configured with an IPv4 address and/or an IPv6 address that will be used to route between nodes. To eliminate node specific IP address configuration, the `node` container can be configured to autodetect these IP addresses. In many systems, there might be multiple physical interfaces on a host, or possibly multiple IP addresses configured on a physical interface. In these cases, there are multiple addresses to choose from and so autodetection of the correct address can be tricky. - "ipam": { +The IP autodetection methods are provided to improve the selection of the correct address, by limiting the selection based on suitable criteria for your deployment. - "type": "calico-ipam" +The following sections describe the available IP autodetection methods. - }, +#### first-found[​](#first-found) - "feature_control": { +The `first-found` option enumerates all interface IP addresses and returns the first valid IP address (based on IP version and type of address) on the first valid interface. Certain known "local" interfaces are omitted, such as the docker bridge. The order that both the interfaces and the IP addresses are listed is system dependent. - "ip_addrs_no_ipam": true +This is the default detection method. However, since this method only makes a very simplified guess, it is recommended to either configure the node with a specific IP address, or to use one of the other detection methods. - } +e.g. - } - ``` +```text +IP_AUTODETECTION_METHOD=first-found - > **WARNING:** This feature allows for the bypassing of network policy via IP spoofing. Users should make sure the proper admission control is in place to prevent users from selecting arbitrary IP addresses. 
+IP6_AUTODETECTION_METHOD=first-found +``` -> **SECONDARY:** -> -> - The `ipAddrs` and `ipAddrsNoIpam` annotations can't be used together. -> - You can only specify one IPv4/IPv6 or one IPv4 and one IPv6 address with these annotations. -> - When `ipAddrs` or `ipAddrsNoIpam` is used with `ipv4pools` or `ipv6pools`, `ipAddrs` / `ipAddrsNoIpam` take priority. +#### kubernetes-internal-ip[​](#kubernetes-internal-ip) -#### Requesting a floating IP[​](#requesting-a-floating-ip) +The `kubernetes-internal-ip` method will select the first internal IP address listed in the Kubernetes node's `Status.Addresses` field -You can request a floating IP address for a pod through [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) with Calico Enterprise. +Example: -> **SECONDARY:** The specified address must belong to an IP Pool for advertisement to work properly. +```text +IP_AUTODETECTION_METHOD=kubernetes-internal-ip -- `cni.projectcalico.org/floatingIPs`: A list of floating IPs which will be assigned to the pod's workload endpoint. +IP6_AUTODETECTION_METHOD=kubernetes-internal-ip +``` - Example: +#### can-reach=DESTINATION[​](#can-reachdestination) - ```yaml - annotations: +The `can-reach` method uses your local routing to determine which IP address will be used to reach the supplied destination. Both IP addresses and domain names may be used. - 'cni.projectcalico.org/floatingIPs': '["10.0.0.1"]' - ``` +Example using IP addresses: - The floatingIPs feature is disabled by default. 
It can be enabled in the feature\_control section of the CNI network config: +```text +IP_AUTODETECTION_METHOD=can-reach=8.8.8.8 - ```json - { +IP6_AUTODETECTION_METHOD=can-reach=2001:4860:4860::8888 +``` - "name": "any_name", +Example using domain names: - "cniVersion": "0.1.0", +```text +IP_AUTODETECTION_METHOD=can-reach=www.google.com - "type": "calico", +IP6_AUTODETECTION_METHOD=can-reach=www.google.com +``` - "ipam": { +#### interface=INTERFACE-REGEX[​](#interfaceinterface-regex) - "type": "calico-ipam" +The `interface` method uses the supplied interface [regular expression](https://pkg.go.dev/regexp) to enumerate matching interfaces and to return the first IP address on the first matching interface. The order that both the interfaces and the IP addresses are listed is system dependent. - }, +Example with valid IP address on interface eth0, eth1, eth2 etc.: - "feature_control": { +```text +IP_AUTODETECTION_METHOD=interface=eth.* - "floating_ips": true +IP6_AUTODETECTION_METHOD=interface=eth.* +``` - } +#### skip-interface=INTERFACE-REGEX[​](#skip-interfaceinterface-regex) - } - ``` +The `skip-interface` method uses the supplied interface [regular expression](https://pkg.go.dev/regexp) to exclude interfaces and to return the first IP address on the first interface that does not match. The order that both the interfaces and the IP addresses are listed is system dependent. - > **WARNING:** This feature can allow pods to receive traffic which may not have been intended for that pod. Users should make sure the proper admission control is in place to prevent users from selecting arbitrary floating IP addresses. +Example with valid IP address on interface exclude enp6s0f0, eth0, eth1, eth2 etc.: -### Using IP pools node selectors[​](#using-ip-pools-node-selectors) +```text +IP_AUTODETECTION_METHOD=skip-interface=enp6s0f0,eth.* -Nodes will only assign workload addresses from IP pools which select them. 
By default, IP pools select all nodes, but this can be configured using the `nodeSelector` field. Check out the [IP pool resource document](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool) for more details. +IP6_AUTODETECTION_METHOD=skip-interface=enp6s0f0,eth.* +``` -Example: +#### cidr=CIDR[​](#cidrcidr) -1. Create (or update) an IP pool that only allocates IPs for nodes where it contains a label `rack=0`. +The `cidr` method will select any IP address from the node that falls within the given CIDRs. For example: - ```bash - kubectl create -f -< -Check out the usage guide on [assign IP addresses based on topology](https://docs.tigera.io/calico-enterprise/latest/networking/ipam/assign-ip-addresses-topology) +### Felix -for a full example. + -### CNI network configuration lists[​](#cni-network-configuration-lists) +## [📄️Configuring Felix](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/configuration) -The CNI 0.3.0 [spec](https://github.com/containernetworking/cni/blob/spec-v0.3.0/SPEC.md#network-configuration-lists) supports "chaining" multiple CNI plugins together. Calico Enterprise supports the following Kubernetes CNI plugins, which are enabled by default. Although chaining other CNI plugins may work, we support only the following tested CNI plugins. +[Configure Felix, the daemon that runs on every machine that provides endpoints.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/configuration) -**Port mapping plugin** +## [📄️Monitoring Felix with Prometheus](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/prometheus) -Calico Enterprise is required to implement Kubernetes host port functionality and is enabled by default. 
+[Review metrics for the Felix component if you are using Prometheus.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/prometheus) -> **SECONDARY:** Be aware of the following [portmap plugin CNI issue](https://github.com/containernetworking/cni/issues/605) where draining nodes may take a long time with a cluster of 100+ nodes and 4000+ services. +### Configuring Felix -To disable it, remove the portmap section from the CNI network configuration in the Calico Enterprise manifests. + -```json -{ +> **SECONDARY:** The following tables detail the configuration file and environment variable parameters. For `FelixConfiguration` resource settings, refer to [Felix Configuration Resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/felixconfig). - "type": "portmap", +Configuration for Felix is read from one of four possible locations, in order, as follows. - "snat": true, +1. Environment variables. +2. The Felix configuration file. +3. Host-specific `FelixConfiguration` resources (`node.`). +4. The global `FelixConfiguration` resource (`default`). - "capabilities": { "portMappings": true } +The value of any configuration parameter is the value read from the *first* location containing a value. For example, if an environment variable contains a value, it takes top precedence. -} -``` +If not set in any of these locations, most configuration parameters have defaults, and it should be rare to have to explicitly set them. -### Order of precedence[​](#order-of-precedence) +The full list of parameters which can be set is as follows. -If more than one of these methods are used for IP address assignment, they will take on the following precedence, 1 being the highest: +## Spec[​](#spec) -1. Kubernetes annotations -2. CNI configuration -3. 
IP pool node selectors +### Datastore connection[​](#datastore-connection) -> **SECONDARY:** Calico Enterprise IPAM will not reassign IP addresses to workloads that are already running. To update running workloads with IP addresses from a newly configured IP pool, they must be recreated. We recommend doing this before going into production or during a maintenance window. +#### `DatastoreType` -### Specify num\_queues for veth interfaces[​](#specify-num_queues-for-veth-interfaces) + -`num_rx_queues` and `num_tx_queues` can be set using the `num_queues` option in the CNI configuration. Default: 1 +**Tab: Configuration file** -For example: +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DatastoreType` | +| Description | Controls which datastore driver Felix will use. Typically, this is detected from the environment and it does not need to be set manually. (For example, if `KUBECONFIG` is set, the kubernetes datastore driver will be used by default). | +| Schema | One of: `etcdv3`, `kubernetes` (case insensitive) | +| Default | `etcdv3` | -```json -{ +**Tab: Environment variable** - "num_queues": 3 +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_DATASTORETYPE` | +| Description | Controls which datastore driver Felix will use. Typically, this is detected from the environment and it does not need to be set manually. (For example, if `KUBECONFIG` is set, the kubernetes datastore driver will be used by default). 
| +| Schema | One of: `etcdv3`, `kubernetes` (case insensitive) | +| Default | `etcdv3` | -} -``` + -### Configure resource requests and limits +#### `EtcdAddr` -## Big picture[​](#big-picture) + -Resource requests and limits are essential configurations for managing resource allocation and ensuring optimal performance of Kubernetes workloads. In Calico Enterprise, these configurations can be customized using custom resources to meet specific requirements and optimize resource utilization. +**Tab: Configuration file** -> **SECONDARY:** It's important to note that the CPU and memory values used in the examples are for demonstration purposes and should be adjusted based on individual system requirements. To find the list of all applicable containers for a component, please refer to its specification. +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `EtcdAddr` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, the etcd server and port to connect to. If EtcdEndpoints is also specified, it takes precedence. | +| Schema | String matching regex `^[^:/]+:\d+$` | +| Default | `127.0.0.1:2379` | -## APIServer custom resource[​](#apiserver-custom-resource) +**Tab: Environment variable** -The [APIServer](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserver) CR provides a way to configure APIServerDeployment. The following sections provide example configurations for this CR. 
+| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_ETCDADDR` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, the etcd server and port to connect to. If EtcdEndpoints is also specified, it takes precedence. | +| Schema | String matching regex `^[^:/]+:\d+$` | +| Default | `127.0.0.1:2379` | -### APIServerDeployment[​](#apiserverdeployment) + -To configure resource specification for the [APIServerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserverdeployment), patch the APIServer CR using the below command: +#### `EtcdCaFile` -```bash -kubectl patch apiserver tigera-secure --type=merge --patch='{"spec": {"apiServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-apiserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-queryserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` + -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
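The `kubectl patch` command above can also be expressed declaratively. A sketch of the equivalent APIServer custom resource, using only the fields and demonstration values that appear in the patch itself:

```yaml
# Sketch only: the same demonstration requests/limits as the patch command,
# written as an APIServer custom resource manifest.
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: tigera-secure
spec:
  apiServerDeployment:
    spec:
      template:
        spec:
          containers:
            - name: calico-apiserver
              resources:
                requests:
                  cpu: 100m
                  memory: 100Mi
                limits:
                  cpu: "1"
                  memory: 1000Mi
            - name: tigera-queryserver
              resources:
                requests:
                  cpu: 100m
                  memory: 100Mi
                limits:
                  cpu: "1"
                  memory: 1000Mi
```

As with the patch, adjust the CPU and memory values to your own system requirements before applying.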
+**Tab: Configuration file** -#### Verification[​](#verification) +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `EtcdCaFile` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS CA file to use when connecting to etcd. If the CA file is specified, the other TLS parameters are mandatory. | +| Schema | Path to file, which must exist | +| Default | none | -You can verify the configured resources using the following command: +**Tab: Environment variable** -```bash -kubectl get deployment.apps/calico-apiserver -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_ETCDCAFILE` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS CA file to use when connecting to etcd. If the CA file is specified, the other TLS parameters are mandatory. | +| Schema | Path to file, which must exist | +| Default | none | -This command will output the configured resource requests and limits for the Calico APIServerDeployment component in JSON format. 
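For open source Calico using the `etcdv3` driver, the tables note that once the CA file is given, the other TLS parameters become mandatory. A minimal sketch of the environment-variable form, with hypothetical endpoints and file paths:

```bash
# Open source Calico only; the etcdv3 driver is not supported in Calico Enterprise/Cloud.
# Endpoints and paths are hypothetical; the CA, certificate, and key must be set together.
export FELIX_DATASTORETYPE=etcdv3
export FELIX_ETCDENDPOINTS=https://etcd-a:2379,https://etcd-b:2379
export FELIX_ETCDCAFILE=/etc/calico/certs/ca.pem
export FELIX_ETCDCERTFILE=/etc/calico/certs/client.pem
export FELIX_ETCDKEYFILE=/etc/calico/certs/client-key.pem
```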
+ -```bash -{ +#### `EtcdCertFile` - "name": "calico-apiserver", + - "resources": { +**Tab: Configuration file** - "limits": { +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `EtcdCertFile` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS certificate file to use when connecting to etcd. If the certificate file is specified, the other TLS parameters are mandatory. | +| Schema | Path to file, which must exist | +| Default | none | - "cpu": "1", +**Tab: Environment variable** - "memory": "1000Mi" +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_ETCDCERTFILE` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS certificate file to use when connecting to etcd. If the certificate file is specified, the other TLS parameters are mandatory. 
| +| Schema | Path to file, which must exist | +| Default | none | - }, + - "requests": { +#### `EtcdEndpoints` - "cpu": "100m", + - "memory": "100Mi" +**Tab: Configuration file** - } +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `EtcdEndpoints` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, comma-delimited list of etcd endpoints to connect to, replaces EtcdAddr and EtcdScheme. | +| Schema | List of HTTP endpoints: comma-delimited list of `http(s)://hostname:port` | +| Default | none | - } +**Tab: Environment variable** -} +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_ETCDENDPOINTS` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, comma-delimited list of etcd endpoints to connect to, replaces EtcdAddr and EtcdScheme. 
| +| Schema | List of HTTP endpoints: comma-delimited list of `http(s)://hostname:port` | +| Default | none | -{ + - "name": "tigera-queryserver", +#### `EtcdKeyFile` - "resources": { + - "limits": { +**Tab: Configuration file** - "cpu": "1", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `EtcdKeyFile` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS private key file to use when connecting to etcd. If the key file is specified, the other TLS parameters are mandatory. | +| Schema | Path to file, which must exist | +| Default | none | - "memory": "1000Mi" +**Tab: Environment variable** - }, +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_ETCDKEYFILE` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS private key file to use when connecting to etcd. If the key file is specified, the other TLS parameters are mandatory. 
| +| Schema | Path to file, which must exist | +| Default | none | - "requests": { + - "cpu": "100m", +#### `EtcdScheme` - "memory": "100Mi" + - } +**Tab: Configuration file** - } +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `EtcdScheme` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.EtcdAddr: when using the `etcdv3` datastore driver, the URL scheme to use. If EtcdEndpoints is also specified, it takes precedence. | +| Schema | One of: `http`, `https` (case insensitive) | +| Default | `http` | -} -``` +**Tab: Environment variable** -## ApplicationLayer custom resource[​](#applicationlayer-custom-resource) +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_ETCDSCHEME` | +| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.EtcdAddr: when using the `etcdv3` datastore driver, the URL scheme to use. If EtcdEndpoints is also specified, it takes precedence. | +| Schema | One of: `http`, `https` (case insensitive) | +| Default | `http` | -The [ApplicationLayer](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#applicationlayer) CR provides a way to configure resources for L7LogCollectorDaemonSet. The following sections provide example configurations for this CR. 
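The `host:port` schema used by `EtcdAddr` can be checked mechanically. A small sketch (the address is the documented default, used here as an example) that validates a value against the table's regex, substituting `[0-9]` for `\d` since POSIX ERE has no `\d`:

```bash
# Validate an EtcdAddr-style value against the schema regex ^[^:/]+:\d+$.
# grep -E uses POSIX ERE, so [0-9]+ stands in for \d+.
addr="127.0.0.1:2379"   # example value; the documented default
if printf '%s\n' "$addr" | grep -Eq '^[^:/]+:[0-9]+$'; then
  echo "valid: $addr"
fi
```

Note this is exactly why `EtcdEndpoints` exists as a separate setting: a full `http(s)://hostname:port` URL contains `:` and `/` and would not match this schema.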
+ -### L7LogCollectorDaemonSet[​](#l7logcollectordaemonset) +#### `FelixHostname` -To configure resource specification for the [L7LogCollectorDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#l7logcollectordaemonset), patch the ApplicationLayer CR using the below command: + -```bash -kubectl patch applicationlayer tigera-secure --type=merge --patch='{"spec": {"l7LogCollectorDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"l7-collector","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"envoy-proxy","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +**Tab: Configuration file** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FelixHostname` | +| Description | The name of this node, used to identify resources in the datastore that belong to this node. Auto-detected from the node's hostname if not provided. | +| Schema | String matching regex `^[a-zA-Z0-9_.-]+$` | +| Default | none | -#### Verification[​](#verification-1) +**Tab: Environment variable** -You can verify the configured resources using the following command: +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_FELIXHOSTNAME` | +| Description | The name of this node, used to identify resources in the datastore that belong to this node. Auto-detected from the node's hostname if not provided. 
| +| Schema | String matching regex `^[a-zA-Z0-9_.-]+$` | +| Default | none | -```bash -kubectl get daemonset.apps/l7-log-collector -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` + -This command will output the configured resource requests and limits for the Calico L7LogCollectorDaemonSet component in JSON format. +#### `TyphaAddr` -```bash -{ + - "name": "envoy-proxy", +**Tab: Configuration file** - "resources": { +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------- | +| Key | `TyphaAddr` | +| Description | If set, tells Felix to connect to Typha at the given address and port. Overrides TyphaK8sServiceName. | +| Schema | String matching regex `^[^:/]+:\d+$` | +| Default | none | - "limits": { +**Tab: Environment variable** - "cpu": "1", +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------- | +| Key | `FELIX_TYPHAADDR` | +| Description | If set, tells Felix to connect to Typha at the given address and port. Overrides TyphaK8sServiceName. | +| Schema | String matching regex `^[^:/]+:\d+$` | +| Default | none | - "memory": "1000Mi" + - }, +#### `TyphaCAFile` - "requests": { + - "cpu": "100m", +**Tab: Configuration file** - "memory": "100Mi" +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `TyphaCAFile` | +| Description | Path to the TLS CA file to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. 
For non-cluster hosts, the CA file is extracted from the tigera-ca-bundle ConfigMap under the TyphaK8sNamespace namespace. | +| Schema | Path to file, which must exist | +| Default | none | - } +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_TYPHACAFILE` | +| Description | Path to the TLS CA file to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the CA file is extracted from the tigera-ca-bundle ConfigMap under the TyphaK8sNamespace namespace. | +| Schema | Path to file, which must exist | +| Default | none | -} + -{ +#### `TyphaCN` - "name": "l7-collector", + - "resources": { +**Tab: Configuration file** - "limits": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `TyphaCN` | +| Description | Common name to use when authenticating to Typha over TLS. If any TLS parameters are specified then one of TyphaCN and TyphaURISAN must be set. | +| Schema | String | +| Default | none | - "cpu": "1", +**Tab: Environment variable** - "memory": "1000Mi" +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_TYPHACN` | +| Description | Common name to use when authenticating to Typha over TLS. If any TLS parameters are specified then one of TyphaCN and TyphaURISAN must be set. 
| +| Schema | String | +| Default | none | - }, + - "requests": { +#### `TyphaCertFile` - "cpu": "100m", + - "memory": "100Mi" +**Tab: Configuration file** - } +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `TyphaCertFile` | +| Description | Path to the TLS certificate to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the certificate will be signed by the in-cluster Tigera operator signer. | +| Schema | Path to file, which must exist | +| Default | none | - } +**Tab: Environment variable** -} -``` +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_TYPHACERTFILE` | +| Description | Path to the TLS certificate to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the certificate will be signed by the in-cluster Tigera operator signer. | +| Schema | Path to file, which must exist | +| Default | none | -## Authentication custom resource[​](#authentication-custom-resource) + -The [Authentication](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#authentication) CR provides a way to configure resources for DexDeployment. The following sections provide example configurations for this CR. 
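Taken together, the Typha TLS parameters travel as a set: if any one is specified, the CA, certificate, and key must all be specified, plus one of `TyphaCN` or `TyphaURISAN` to identify the server certificate. A sketch with hypothetical paths and a hypothetical common name:

```bash
# All values hypothetical; setting any Typha TLS parameter makes the rest mandatory,
# and one of TyphaCN / TyphaURISAN must be set to authenticate the Typha server.
export FELIX_TYPHACAFILE=/etc/calico/typha/ca.pem
export FELIX_TYPHACERTFILE=/etc/calico/typha/client.pem
export FELIX_TYPHAKEYFILE=/etc/calico/typha/client-key.pem
export FELIX_TYPHACN=typha-server
```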
+#### `TyphaK8sNamespace` -### DexDeployment[​](#dexdeployment) + -To configure resource specification for the [DexDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#dexdeployment), patch the Authentication CR using the below command: +**Tab: Configuration file** -```bash -kubectl patch authentication tigera-secure --type=merge --patch='{"spec": {"dexDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-dex","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------- | +| Key | `TyphaK8sNamespace` | +| Description | Namespace to look in when looking for Typha's service (see TyphaK8sServiceName). | +| Schema | String | +| Default | `kube-system` | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +**Tab: Environment variable** -#### Verification[​](#verification-2) +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------- | +| Key | `FELIX_TYPHAK8SNAMESPACE` | +| Description | Namespace to look in when looking for Typha's service (see TyphaK8sServiceName). | +| Schema | String | +| Default | `kube-system` | -You can verify the configured resources using the following command: + -```bash -kubectl get deployment.apps/tigera-dex -n tigera-dex -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +#### `TyphaK8sServiceName` -This command will output the configured resource requests and limits for the Calico DexDeployment component in JSON format. 
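Felix finds Typha either by an explicit address or by a Kubernetes Service lookup, and `TyphaAddr`, when set, overrides the Service-based discovery. A sketch of the Service-lookup form; the service name, namespace, and address below are assumptions for illustration, not documented values:

```bash
# Assumed names for illustration; point these at wherever your Typha Service actually lives.
export FELIX_TYPHAK8SNAMESPACE=calico-system   # the documented default is kube-system
export FELIX_TYPHAK8SSERVICENAME=calico-typha
# Alternatively, bypass discovery with a direct host:port (overrides the two settings above):
# export FELIX_TYPHAADDR=10.0.0.10:5473
```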
+ -```bash -{ +**Tab: Configuration file** - "name": "tigera-dex", +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `TyphaK8sServiceName` | +| Description | If set, tells Felix to connect to Typha by looking up the Endpoints of the given Kubernetes Service in namespace specified by TyphaK8sNamespace. | +| Schema | String | +| Default | none | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_TYPHAK8SSERVICENAME` | +| Description | If set, tells Felix to connect to Typha by looking up the Endpoints of the given Kubernetes Service in namespace specified by TyphaK8sNamespace. | +| Schema | String | +| Default | none | - "cpu": "1", + - "memory": "1000Mi" +#### `TyphaKeyFile` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "100m", +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `TyphaKeyFile` | +| Description | Path to the TLS private key to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the private key is generated locally and rotated when the certificate expires. 
| +| Schema | Path to file, which must exist | +| Default | none | - "memory": "100Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_TYPHAKEYFILE` | +| Description | Path to the TLS private key to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the private key is generated locally and rotated when the certificate expires. | +| Schema | Path to file, which must exist | +| Default | none | - } + -} -``` +#### `TyphaReadTimeout` -## Compliance custom resource[​](#compliance-custom-resource) + -The [Compliance](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliance) CR provides a way to configure resources for ComplianceControllerDeployment, ComplianceSnapshotterDeployment, ComplianceBenchmarkerDaemonSet, ComplianceServerDeployment, ComplianceReporterPodTemplate. The following sections provide example configurations for this CR. +**Tab: Configuration file** -Example Configurations: +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `TyphaReadTimeout` | +| Description | Read timeout when reading from the Typha connection. If typha sends no data for this long, Felix will exit and restart. (Note that Typha sends regular pings so traffic is always expected.) 
| +| Schema | Seconds (floating point) | +| Default | `30` | -### ComplianceControllerDeployment[​](#compliancecontrollerdeployment) +**Tab: Environment variable** -To configure resource specification for the [ComplianceControllerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliancecontrollerdeployment), patch the Compliance CR using the below command: +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_TYPHAREADTIMEOUT` | +| Description | Read timeout when reading from the Typha connection. If typha sends no data for this long, Felix will exit and restart. (Note that Typha sends regular pings so traffic is always expected.) | +| Schema | Seconds (floating point) | +| Default | `30` | -```bash -kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` + -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
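Because Typha sends regular pings, a connection that stays silent for `TyphaReadTimeout` seconds is treated as dead and Felix exits so it can be restarted. A sketch that lengthens both timeouts; the values are examples only (the documented defaults are 30 and 10 seconds):

```bash
# Example values only; both settings take seconds (floating point).
export FELIX_TYPHAREADTIMEOUT=60    # default 30; Felix restarts if Typha is silent this long
export FELIX_TYPHAWRITETIMEOUT=20   # default 10
```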
+#### `TyphaURISAN` -#### Verification[​](#verification-3) + -You can verify the configured resources using the following command: +**Tab: Configuration file** -```bash -kubectl get deployment.apps/compliance-controller -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `TyphaURISAN` | +| Description | URI SAN to use when authenticating to Typha over TLS. If any TLS parameters are specified then one of TyphaCN and TyphaURISAN must be set. | +| Schema | String | +| Default | none | -This command will output the configured resource requests and limits for the ComplianceControllerDeployment component in JSON format. +**Tab: Environment variable** -```bash -{ +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_TYPHAURISAN` | +| Description | URI SAN to use when authenticating to Typha over TLS. If any TLS parameters are specified then one of TyphaCN and TyphaURISAN must be set. | +| Schema | String | +| Default | none | - "name": "compliance-controller", + - "resources": { +#### `TyphaWriteTimeout` - "limits": { + - "cpu": "1", +**Tab: Configuration file** - "memory": "1000Mi" +| Attribute | Value | +| ----------- | ----------------------------------------- | +| Key | `TyphaWriteTimeout` | +| Description | Write timeout when writing data to Typha. | +| Schema | Seconds (floating point) | +| Default | `10` | - }, +**Tab: Environment variable** - "requests": { +| Attribute | Value | +| ----------- | ----------------------------------------- | +| Key | `FELIX_TYPHAWRITETIMEOUT` | +| Description | Write timeout when writing data to Typha. 
| +| Schema | Seconds (floating point) | +| Default | `10` | - "cpu": "100m", + - "memory": "100Mi" +### Process: Feature detection/overrides[​](#process-feature-detectionoverrides) - } +#### `FeatureDetectOverride` - } + -} -``` +**Tab: Configuration file** -### ComplianceSnapshotterDeployment[​](#compliancesnapshotterdeployment) +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FeatureDetectOverride` | +| Description | Used to override feature detection based on auto-detected platform capabilities. Values are specified in a comma separated list with no spaces, example; "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=". A value of "true" or "false" will force enable/disable feature, empty or omitted values fall back to auto-detection. 
| +| Schema | Comma-delimited list of key=value pairs | +| Default | none | -To configure resource specification for the [ComplianceSnapshotterDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliancesnapshotterdeployment), patch the Compliance CR using the below command: +**Tab: Environment variable** -```bash -kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceSnapshotterDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-snapshotter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_FEATUREDETECTOVERRIDE` | +| Description | Used to override feature detection based on auto-detected platform capabilities. Values are specified in a comma separated list with no spaces, example; "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=". A value of "true" or "false" will force enable/disable feature, empty or omitted values fall back to auto-detection. | +| Schema | Comma-delimited list of key=value pairs | +| Default | none | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
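The three override states described in the table — force on (`true`), force off (`false`), and auto-detect (empty) — combine into a single comma-delimited value. A sketch using the example keys from the parameter's own description:

```bash
# Keys taken from the documented example; the empty value ("RestoreSupportsLock=")
# leaves that feature on auto-detection.
export FELIX_FEATUREDETECTOVERRIDE="SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock="
```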
+
-#### Verification[​](#verification-4)
+#### `FeatureGates`
-You can verify the configured resources using the following command:
+
-```bash
-kubectl get deployment.apps/compliance-snapshotter -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
+**Tab: Configuration file**
-This command will output the configured resource requests and limits for the ComplianceSnapshotterDeployment in JSON format.
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `FeatureGates` |
+| Description | Used to enable or disable tech-preview Calico features. Values are specified in a comma-separated list with no spaces, for example: "BPFConnectTimeLoadBalancingWorkaround=enabled,XyZ=false". This is used to enable features that are not fully production ready. |
+| Schema      | Comma-delimited list of key=value pairs |
+| Default     | none |
-```bash
-{
+**Tab: Environment variable**
- "name": "compliance-snapshotter",
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `FELIX_FEATUREGATES` |
+| Description | Used to enable or disable tech-preview Calico features. Values are specified in a comma-separated list with no spaces, for example: "BPFConnectTimeLoadBalancingWorkaround=enabled,XyZ=false". This is used to enable features that are not fully production ready. |
+| Schema      | Comma-delimited list of key=value pairs |
+| Default     | none |
- "resources": {
+
- "limits": {
+### Process: Go runtime[​](#process-go-runtime)
- "cpu": "1",
+#### `GoGCThreshold`
- "memory": "1000Mi"
+
- },
+**Tab: Configuration file**
- "requests": {
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `GoGCThreshold` |
+| Description | Sets the Go runtime's garbage collection threshold, i.e. the percentage that the heap is allowed to grow before garbage collection is triggered. In general, doubling the value halves the CPU time spent doing GC, but it also doubles peak GC memory overhead. A special value of -1 can be used to disable GC entirely; this should only be used in conjunction with the GoMemoryLimitMB setting. This setting is overridden by the GOGC environment variable. |
+| Schema      | Integer: \[-1, 2^63-1] |
+| Default     | `40` |
- "cpu": "100m",
+**Tab: Environment variable**
- "memory": "100Mi"
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `FELIX_GOGCTHRESHOLD` |
+| Description | Sets the Go runtime's garbage collection threshold, i.e. the percentage that the heap is allowed to grow before garbage collection is triggered. In general, doubling the value halves the CPU time spent doing GC, but it also doubles peak GC memory overhead. A special value of -1 can be used to disable GC entirely; this should only be used in conjunction with the GoMemoryLimitMB setting. This setting is overridden by the GOGC environment variable. |
+| Schema      | Integer: \[-1, 2^63-1] |
+| Default     | `40` |
- }
+
- }
+#### `GoMaxProcs`
-}
-```
+
-### ComplianceBenchmarkerDaemonSet[​](#compliancebenchmarkerdaemonset)
+**Tab: Configuration file**
-To configure resource specification for the [ComplianceBenchmarkerDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliancebenchmarkerdaemonset), patch the Compliance CR using the below command:
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `GoMaxProcs` |
+| Description | Sets the maximum number of CPUs that the Go runtime will use concurrently. A value of -1 means "use the system default"; typically the number of real CPUs on the system. This setting is overridden by the GOMAXPROCS environment variable. |
+| Schema      | Integer: \[-1, 2^63-1] |
+| Default     | `-1` |
-```bash
-kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceBenchmarkerDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-benchmarker","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
+**Tab: Environment variable**
-This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB).
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `FELIX_GOMAXPROCS` |
+| Description | Sets the maximum number of CPUs that the Go runtime will use concurrently. A value of -1 means "use the system default"; typically the number of real CPUs on the system. This setting is overridden by the GOMAXPROCS environment variable. |
+| Schema      | Integer: \[-1, 2^63-1] |
+| Default     | `-1` |
-#### Verification[​](#verification-5)
+
-You can verify the configured resources using the following command:
+#### `GoMemoryLimitMB`
-```bash
-kubectl get daemonset.apps/compliance-benchmarker -n tigera-compliance -o json |jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
+
-```bash
-{
+**Tab: Configuration file**
- "name": "compliance-benchmarker",
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `GoMemoryLimitMB` |
+| Description | Sets a (soft) memory limit for the Go runtime in MB. The Go runtime will try to keep its memory usage under the limit by triggering GC as needed. To avoid thrashing, it will exceed the limit if GC starts to take more than 50% of the process's CPU time. A value of -1 disables the memory limit. Note that the memory limit, if used, must be considerably less than any hard resource limit set at the container or pod level. This is because Felix is not the only process that must run in the container or pod. This setting is overridden by the GOMEMLIMIT environment variable. |
+| Schema      | Integer: \[-1, 2^63-1] |
+| Default     | `-1` |
- "resources": {
+**Tab: Environment variable**
- "limits": {
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `FELIX_GOMEMORYLIMITMB` |
+| Description | Sets a (soft) memory limit for the Go runtime in MB. The Go runtime will try to keep its memory usage under the limit by triggering GC as needed. To avoid thrashing, it will exceed the limit if GC starts to take more than 50% of the process's CPU time. A value of -1 disables the memory limit. Note that the memory limit, if used, must be considerably less than any hard resource limit set at the container or pod level. This is because Felix is not the only process that must run in the container or pod. This setting is overridden by the GOMEMLIMIT environment variable. |
+| Schema      | Integer: \[-1, 2^63-1] |
+| Default     | `-1` |
- "cpu": "1",
+
- "memory": "1000Mi"
+### Process: Health port and timeouts[​](#process-health-port-and-timeouts)
- },
+#### `HealthEnabled`
- "requests": {
+
- "cpu": "100m",
+**Tab: Configuration file**
- "memory": "100Mi"
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `HealthEnabled` |
+| Description | If set to true, enables Felix's health port, which provides readiness and liveness endpoints. |
+| Schema      | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
+| Default     | `false` |
- }
+**Tab: Environment variable**
- }
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `FELIX_HEALTHENABLED` |
+| Description | If set to true, enables Felix's health port, which provides readiness and liveness endpoints. |
+| Schema      | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
+| Default     | `false` |
-}
-```
+
-This command will output the configured resource requests and limits for the ComplianceBenchmarkerDaemonSet in JSON format.
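+As a sketch of how the Go runtime and health parameters above are typically applied, the camel-cased fields below are set on the cluster-wide `FelixConfiguration` resource rather than as environment variables. The values are illustrative only, and field availability should be verified against your Calico version:
+
+```yaml
+apiVersion: projectcalico.org/v3
+kind: FelixConfiguration
+metadata:
+  name: default
+spec:
+  # Illustrative overrides of the documented defaults
+  # (GoGCThreshold=40, GoMemoryLimitMB=-1, HealthEnabled=false).
+  goGCThreshold: 40
+  goMemoryLimitMB: 300   # keep well below the pod's hard memory limit
+  healthEnabled: true
+```
+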
+#### `HealthHost` -### ComplianceServerDeployment[​](#complianceserverdeployment) + -To configure resource specification for the [ComplianceServerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#complianceserverdeployment), patch the Compliance CR using the below command: +**Tab: Configuration file** -```bash -kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------ | +| Key | `HealthHost` | +| Description | The host that the health server should bind to. | +| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,64}$` | +| Default | `localhost` | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +**Tab: Environment variable** -#### Verification[​](#verification-6) +| Attribute | Value | +| ----------- | ------------------------------------------------ | +| Key | `FELIX_HEALTHHOST` | +| Description | The host that the health server should bind to. | +| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,64}$` | +| Default | `localhost` | -You can verify the configured resources using the following command: + -```bash -kubectl get deployment.apps/compliance-server -n tigera-compliance -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +#### `HealthPort` -This command will output the configured resource requests and limits for the ComplianceServerDeployment in JSON format. 
+ -```bash -{ +**Tab: Configuration file** - "name": "compliance-server", +| Attribute | Value | +| ----------- | --------------------------------------------------- | +| Key | `HealthPort` | +| Description | The TCP port that the health server should bind to. | +| Schema | Integer: \[0,65535] | +| Default | `9099` | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | --------------------------------------------------- | +| Key | `FELIX_HEALTHPORT` | +| Description | The TCP port that the health server should bind to. | +| Schema | Integer: \[0,65535] | +| Default | `9099` | - "cpu": "1", + - "memory": "1000Mi" +#### `HealthTimeoutOverrides` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "100m", +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `HealthTimeoutOverrides` | +| Description | Allows the internal watchdog timeouts of individual subcomponents to be overridden. This is useful for working around "false positive" liveness timeouts that can occur in particularly stressful workloads or if CPU is constrained. For a list of active subcomponents, see Felix's logs. | +| Schema | Comma-delimited list of `=` pairs, where durations use Go's standard format (e.g. 
1s, 1m, 1h3m2s) | +| Default | none | - "memory": "100Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_HEALTHTIMEOUTOVERRIDES` | +| Description | Allows the internal watchdog timeouts of individual subcomponents to be overridden. This is useful for working around "false positive" liveness timeouts that can occur in particularly stressful workloads or if CPU is constrained. For a list of active subcomponents, see Felix's logs. | +| Schema | Comma-delimited list of `=` pairs, where durations use Go's standard format (e.g. 1s, 1m, 1h3m2s) | +| Default | none | - } + -} -``` +### Process: Logging[​](#process-logging) -### ComplianceReporterPodTemplate.[​](#compliancereporterpodtemplate) +#### `LogDebugFilenameRegex` -To configure resource specification for the [ComplianceReporterPodTemplate](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#compliancereporterpodtemplate), patch the Compliance CR using the below command: + -```bash -kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceReporterPodTemplate": {"template": {"spec": {"containers":[{"name":"reporter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}' -``` +**Tab: Configuration file** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
+| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `LogDebugFilenameRegex` | +| Description | Controls which source code files have their Debug log output included in the logs. Only logs from files with names that match the given regular expression are included. The filter only applies to Debug level logs. | +| Schema | Regular expression | +| Default | none | -#### Verification[​](#verification-7) +**Tab: Environment variable** -You can verify the configured resources using the following command: +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_LOGDEBUGFILENAMEREGEX` | +| Description | Controls which source code files have their Debug log output included in the logs. Only logs from files with names that match the given regular expression are included. The filter only applies to Debug level logs. | +| Schema | Regular expression | +| Default | none | -```bash -kubectl get Podtemplates tigera.io.report -n tigera-compliance -o json | jq '.template.spec.containers[] | {name: .name, resources: .resources}' -``` + -This command will output the configured resource requests and limits for the ComplianceReporterPodTemplate component in JSON format. 
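+As a hedged illustration (assumed `FelixConfiguration` field names; check them against your Calico version), `LogDebugFilenameRegex` is usually paired with a Debug log severity, because the filter applies only to Debug-level logs:
+
+```yaml
+apiVersion: projectcalico.org/v3
+kind: FelixConfiguration
+metadata:
+  name: default
+spec:
+  logSeverityScreen: Debug        # Debug logs must be enabled for the filter to have any effect
+  logDebugFilenameRegex: "bpf.*"  # hypothetical filter: keep Debug logs only from matching source files
+```
+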
+#### `LogDropActionOverride` -```bash -{ + - "name": "reporter", +**Tab: Configuration file** - "resources": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `LogDropActionOverride` | +| Description | Specifies whether or not to include the DropActionOverride in the logs when it is triggered. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "limits": { +**Tab: Environment variable** - "cpu": "1", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_LOGDROPACTIONOVERRIDE` | +| Description | Specifies whether or not to include the DropActionOverride in the logs when it is triggered. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "memory": "1000Mi" + - }, +#### `LogFilePath` - "requests": { + - "cpu": "100m", +**Tab: Configuration file** - "memory": "100Mi" +| Attribute | Value | +| ----------- | -------------------------------------------------------------------- | +| Key | `LogFilePath` | +| Description | The full path to the Felix log. Set to none to disable file logging. | +| Schema | Path to file | +| Default | `/var/log/calico/felix.log` | - } +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | -------------------------------------------------------------------- | +| Key | `FELIX_LOGFILEPATH` | +| Description | The full path to the Felix log. Set to none to disable file logging. 
| +| Schema | Path to file | +| Default | `/var/log/calico/felix.log` | -} -``` + -## Installation custom resource[​](#installation-custom-resource) +#### `LogPrefix` -The [Installation CR](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api) provides a way to configure resources for various Calico Enterprise components, including TyphaDeployment, calicoNodeDaemonSet, CalicoNodeWindowsDaemonSet, csiNodeDriverDaemonSet and KubeControllersDeployment. The following sections provide example configurations for this CR. + -### TyphaDeployment[​](#typhadeployment) +**Tab: Configuration file** -To configure resource specification for the [TyphaDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#typhadeployment), patch the installation CR using the below command: +| Attribute | Value | +| ----------- | -------------------------------------------------------- | +| Key | `LogPrefix` | +| Description | The log prefix that Felix uses when rendering LOG rules. | +| Schema | String | +| Default | `calico-packet` | -```bash -kubectl patch installations default --type=merge --patch='{"spec": {"typhaDeployment": {"spec": {"template": {"spec": {"containers": [{"name": "calico-typha", "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}, "limits": {"cpu": "1", "memory": "1000Mi"}}}]}}}}}}' -``` +**Tab: Environment variable** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +| Attribute | Value | +| ----------- | -------------------------------------------------------- | +| Key | `FELIX_LOGPREFIX` | +| Description | The log prefix that Felix uses when rendering LOG rules. 
| +| Schema | String | +| Default | `calico-packet` | -#### Verification[​](#verification-8) + -You can verify the configured resources using the following command: +#### `LogSeverityFile` -```bash -kubectl get deployment.apps/calico-typha -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' -``` + -This command will output the configured resource requests and limits for the Calico TyphaDeployment component in JSON format. +**Tab: Configuration file** -```bash -{ +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------- | +| Key | `LogSeverityFile` | +| Description | The log severity above which logs are sent to the log file. | +| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | +| Default | `INFO` | - "name": "calico-typha", +**Tab: Environment variable** - "resources": { +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------- | +| Key | `FELIX_LOGSEVERITYFILE` | +| Description | The log severity above which logs are sent to the log file. | +| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | +| Default | `INFO` | - "limits": { + - "cpu": "1", +#### `LogSeverityScreen` - "memory": "1000Mi" + - }, +**Tab: Configuration file** - "requests": { +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------- | +| Key | `LogSeverityScreen` | +| Description | The log severity above which logs are sent to the stdout. 
| +| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | +| Default | `INFO` | - "cpu": "100m", +**Tab: Environment variable** - "memory": "100Mi" +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------- | +| Key | `FELIX_LOGSEVERITYSCREEN` | +| Description | The log severity above which logs are sent to the stdout. | +| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | +| Default | `INFO` | - } + - } +#### `LogSeveritySys` -} -``` + -### CalicoNodeDaemonSet[​](#caliconodedaemonset) +**Tab: Configuration file** -To configure resource requests for the [calicoNodeDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#caliconodedaemonset) component, patch the installation CR using the below command: +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------- | +| Key | `LogSeveritySys` | +| Description | The log severity above which logs are sent to the syslog. Set to None for no logging to syslog. | +| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | +| Default | `INFO` | -```bash -kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' -``` +**Tab: Environment variable** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
+| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------- | +| Key | `FELIX_LOGSEVERITYSYS` | +| Description | The log severity above which logs are sent to the syslog. Set to None for no logging to syslog. | +| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | +| Default | `INFO` | -#### Verification[​](#verification-9) + -You can verify the configured resources using the following command: +### Process: Prometheus metrics[​](#process-prometheus-metrics) -```bash -kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' -``` +#### `PrometheusGoMetricsEnabled` -This command will output the configured resource requests and limits for the Calico calicoNodeDaemonSet component in JSON format. + -```bash -{ +**Tab: Configuration file** - "name": "calico-node", +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `PrometheusGoMetricsEnabled` | +| Description | Disables Go runtime metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `true` | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_PROMETHEUSGOMETRICSENABLED` | +| Description | Disables Go runtime metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | - "cpu": "1", + - "memory": "1000Mi" +#### `PrometheusMetricsCAFile` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "100m", +| Attribute | Value | +| ----------- | -------------------------------------------------------------- | +| Key | `PrometheusMetricsCAFile` | +| Description | The path to the TLS CA file for the Prometheus metrics server. | +| Schema | Path to file, which must exist | +| Default | none | - "memory": "100Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | -------------------------------------------------------------- | +| Key | `FELIX_PROMETHEUSMETRICSCAFILE` | +| Description | The path to the TLS CA file for the Prometheus metrics server. 
| +| Schema | Path to file, which must exist | +| Default | none | - } + -} -``` +#### `PrometheusMetricsCertFile` -### calicoNodeWindowsDaemonSet[​](#caliconodewindowsdaemonset) + -To configure resource requests for the [calicoNodeWindowsDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#caliconodewindowsdaemonset) component, patch the installation CR using the below command: +**Tab: Configuration file** -```bash -kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeWindowsDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node-windows","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' -``` +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------- | +| Key | `PrometheusMetricsCertFile` | +| Description | The path to the TLS certificate file for the Prometheus metrics server. | +| Schema | Path to file, which must exist | +| Default | none | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +**Tab: Environment variable** -#### Verification[​](#verification-10) +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------- | +| Key | `FELIX_PROMETHEUSMETRICSCERTFILE` | +| Description | The path to the TLS certificate file for the Prometheus metrics server. 
| +| Schema | Path to file, which must exist | +| Default | none | -You can verify the configured resources using the following command: + -```bash -kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' -``` +#### `PrometheusMetricsEnabled` -This command will output the configured resource requests and limits for the Calico calicoNodeWindowsDaemonSet component in JSON format. + -```bash -{ +**Tab: Configuration file** - "name": "calico-node", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `PrometheusMetricsEnabled` | +| Description | Enables the Prometheus metrics server in Felix if set to true. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_PROMETHEUSMETRICSENABLED` | +| Description | Enables the Prometheus metrics server in Felix if set to true. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "cpu": "1", + - "memory": "1000Mi" +#### `PrometheusMetricsHost` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "100m", +| Attribute | Value | +| ----------- | ----------------------------------------------------------- | +| Key | `PrometheusMetricsHost` | +| Description | The host that the Prometheus metrics server should bind to. 
| +| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,64}$` | +| Default | none | - "memory": "100Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | ----------------------------------------------------------- | +| Key | `FELIX_PROMETHEUSMETRICSHOST` | +| Description | The host that the Prometheus metrics server should bind to. | +| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,64}$` | +| Default | none | - } + -} -``` +#### `PrometheusMetricsKeyFile` -### CalicoKubeControllersDeployment[​](#calicokubecontrollersdeployment) + -To configure resource requests for the [CalicoKubeControllersDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#calicokubecontrollersdeployment) component, patch the installation CR using the below command: +**Tab: Configuration file** -```bash -kubectl patch installations default --type=merge --patch='{"spec": {"calicoKubeControllersDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-kube-controllers","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' -``` +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------- | +| Key | `PrometheusMetricsKeyFile` | +| Description | The path to the TLS private key file for the Prometheus metrics server. | +| Schema | Path to file, which must exist | +| Default | none | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +**Tab: Environment variable** -#### Verification[​](#verification-11) +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------- | +| Key | `FELIX_PROMETHEUSMETRICSKEYFILE` | +| Description | The path to the TLS private key file for the Prometheus metrics server. 
| +| Schema | Path to file, which must exist | +| Default | none | -You can verify the configured resources using the following command: + -```bash -kubectl get deployment.apps/calico-kube-controllers -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' -``` +#### `PrometheusMetricsPort` -This command will output the configured resource requests and limits for the Calico CalicoKubeControllersDeployment component in JSON format. + -```bash -{ +**Tab: Configuration file** - "name": "calico-kube-controllers", +| Attribute | Value | +| ----------- | --------------------------------------------------------------- | +| Key | `PrometheusMetricsPort` | +| Description | The TCP port that the Prometheus metrics server should bind to. | +| Schema | Integer: \[0,65535] | +| Default | `9091` | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | --------------------------------------------------------------- | +| Key | `FELIX_PROMETHEUSMETRICSPORT` | +| Description | The TCP port that the Prometheus metrics server should bind to. | +| Schema | Integer: \[0,65535] | +| Default | `9091` | - "cpu": "1", + - "memory": "1000Mi" +#### `PrometheusProcessMetricsEnabled` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "100m", +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `PrometheusProcessMetricsEnabled` | +| Description | Disables process metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `true` | - "memory": "100Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_PROMETHEUSPROCESSMETRICSENABLED` | +| Description | Disables process metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | - } + -} +#### `PrometheusWireGuardMetricsEnabled` -``` + -### CSINodeDriverDaemonSet[​](#csinodedriverdaemonset) +**Tab: Configuration file** -To configure resource requests for the [CSINodeDriverDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#csinodedriverdaemonset) component, patch the installation CR using the below command: +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `PrometheusWireGuardMetricsEnabled` | +| Description | Disables wireguard metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `true` | -```bash -kubectl patch installations default --type=merge --patch='{"spec": {"csiNodeDriverDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-csi","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}},{"name":"csi-node-driver-registrar","resources":{"requests":{"cpu":"50m", "memory":"50Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}' -``` +**Tab: Environment variable** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_PROMETHEUSWIREGUARDMETRICSENABLED` | +| Description | Disables wireguard metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | -#### Verification[​](#verification-12) + -You can verify the configured resources using the following command: +### Data plane: Common[​](#data-plane-common) -```bash -kubectl get daemonset.apps/csi-node-driver -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}' -``` +#### `AllowIPIPPacketsFromWorkloads` -This command will output the configured resource requests and limits for the Calico calicoNodeDaemonSet component in JSON format. 
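The Felix Prometheus settings tabulated above can also be set cluster-wide through the FelixConfiguration resource instead of per-node `FELIX_*` environment variables. A minimal sketch, assuming the spec field names are the camelCase forms of the configuration-file keys documented above:

```yaml
# Sketch only: prometheusMetricsPort and prometheusProcessMetricsEnabled are
# assumed camelCase equivalents of the configuration-file keys above.
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  prometheusMetricsPort: 9091             # TCP port for the metrics server
  prometheusProcessMetricsEnabled: false  # drop per-process metrics to reduce Prometheus load
```

Applying a resource like this updates every Felix instance at once, whereas the environment-variable form must be set on each calico-node container.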
+ -```bash -{ +**Tab: Configuration file** - "name": "calico-csi", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `AllowIPIPPacketsFromWorkloads` | +| Description | Controls whether Felix will add a rule to drop IPIP encapsulated traffic from workloads. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_ALLOWIPIPPACKETSFROMWORKLOADS` | +| Description | Controls whether Felix will add a rule to drop IPIP encapsulated traffic from workloads. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "cpu": "1", + - "memory": "1000Mi" +#### `AllowVXLANPacketsFromWorkloads` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "100m", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `AllowVXLANPacketsFromWorkloads` | +| Description | Controls whether Felix will add a rule to drop VXLAN encapsulated traffic from workloads. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `false` | - "memory": "100Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_ALLOWVXLANPACKETSFROMWORKLOADS` | +| Description | Controls whether Felix will add a rule to drop VXLAN encapsulated traffic from workloads. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - } + -} +#### `CgroupV2Path` -{ + - "name": "csi-node-driver-registrar", +**Tab: Configuration file** - "resources": { +| Attribute | Value | +| ----------- | ------------------------------------------------------------------ | +| Key | `CgroupV2Path` | +| Description | Overrides the default location where to find the cgroup hierarchy. | +| Schema | String | +| Default | none | - "limits": { +**Tab: Environment variable** - "cpu": "1", +| Attribute | Value | +| ----------- | ------------------------------------------------------------------ | +| Key | `FELIX_CGROUPV2PATH` | +| Description | Overrides the default location where to find the cgroup hierarchy. 
| +| Schema | String | +| Default | none | - "memory": "1000Mi" + - }, +#### `ChainInsertMode` - "requests": { + - "cpu": "50m", +**Tab: Configuration file** - "memory": "50Mi" +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `ChainInsertMode` | +| Description | Controls whether Felix hooks the kernel's top-level iptables chains by inserting a rule at the top of the chain or by appending a rule at the bottom. insert is the safe default since it prevents Calico's rules from being bypassed. If you switch to append mode, be sure that the other rules in the chains signal acceptance by falling through to the Calico rules, otherwise the Calico policy will be bypassed. | +| Schema | One of: `append`, `insert` (case insensitive) | +| Default | `insert` | - } +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_CHAININSERTMODE` | +| Description | Controls whether Felix hooks the kernel's top-level iptables chains by inserting a rule at the top of the chain or by appending a rule at the bottom. insert is the safe default since it prevents Calico's rules from being bypassed. 
If you switch to append mode, be sure that the other rules in the chains signal acceptance by falling through to the Calico rules, otherwise the Calico policy will be bypassed. | +| Schema | One of: `append`, `insert` (case insensitive) | +| Default | `insert` | -} -``` + -## IntrusionDetection custom resource[​](#intrusiondetection-custom-resource) +#### `DataplaneDriver` -The [IntrusionDetection](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#intrusiondetection) CR provides a way to configure resources for IntrusionDetectionControllerDeployment. The following sections provide example configurations for this CR. + -### IntrusionDetectionControllerDeployment.[​](#intrusiondetectioncontrollerdeployment) +**Tab: Configuration file** -To configure resource specification for the [IntrusionDetectionControllerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#intrusiondetectioncontrollerdeployment), patch the IntrusionDetection CR using the below command: +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DataplaneDriver` | +| Description | Filename of the external dataplane driver to use. Only used if UseInternalDataplaneDriver is set to false. | +| Schema | Path to executable, which must exist. If not an absolute path, the directory containing this binary and the system path will be searched. 
| +| Default | `calico-iptables-plugin` | -```bash -kubectl patch intrusiondetection tigera-secure --type=merge --patch='{"spec": {"intrusionDetectionControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"webhooks-processor","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}},{"name":"controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}' -``` +**Tab: Environment variable** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_DATAPLANEDRIVER` | +| Description | Filename of the external dataplane driver to use. Only used if UseInternalDataplaneDriver is set to false. | +| Schema | Path to executable, which must exist. If not an absolute path, the directory containing this binary and the system path will be searched. | +| Default | `calico-iptables-plugin` | -#### Verification[​](#verification-13) + -You can verify the configured resources using the following command: +#### `DataplaneWatchdogTimeout` -```bash -kubectl get deployment.apps/intrusion-detection-controller -n tigera-intrusion-detection -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` + -This command will output the configured resource requests and limits for the IntrusionDetectionControllerDeployment in JSON format. 
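The verification steps in this section all use the same jq projection, `.spec.template.spec.containers[] | {name: .name, resources: .resources}`, to pull each container's resource settings out of the object returned by `kubectl get ... -o json`. A minimal Python sketch of that same extraction, run against a hypothetical manifest rather than live cluster output:

```python
import json

# Hypothetical manifest standing in for `kubectl get ... -o json` output;
# the real command returns the full object from the cluster.
manifest = {
    "spec": {"template": {"spec": {"containers": [
        {
            "name": "controller",
            "resources": {
                "limits": {"cpu": "1", "memory": "1000Mi"},
                "requests": {"cpu": "100m", "memory": "1000Mi"},
            },
        },
    ]}}}
}

# Same projection as the jq filter:
#   .spec.template.spec.containers[] | {name: .name, resources: .resources}
summary = [
    {"name": c["name"], "resources": c["resources"]}
    for c in manifest["spec"]["template"]["spec"]["containers"]
]
print(json.dumps(summary, indent=2))
```

Each entry in the printed list corresponds to one container, which is what the patch commands in this section configure and what the jq output lets you confirm.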
+**Tab: Configuration file** -```bash -{ +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DataplaneWatchdogTimeout` | +| Description | The readiness/liveness timeout used for Felix's (internal) dataplane driver. Deprecated: replaced by the generic HealthTimeoutOverrides. | +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) | - "name": "controller", +**Tab: Environment variable** - "resources": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_DATAPLANEWATCHDOGTIMEOUT` | +| Description | The readiness/liveness timeout used for Felix's (internal) dataplane driver. Deprecated: replaced by the generic HealthTimeoutOverrides. | +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) | - "limits": { + - "cpu": "1", +#### `DefaultEndpointToHostAction` - "memory": "1000Mi" + - }, +**Tab: Configuration file** - "requests": { +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DefaultEndpointToHostAction` | +| Description | Controls what happens to traffic that goes from a workload endpoint 
to the host itself (after the endpoint's egress policy is applied). By default, Calico blocks traffic from workload endpoints to the host itself with an iptables "DROP" action. If you want to allow some or all traffic from endpoint to host, set this parameter to RETURN or ACCEPT. Use RETURN if you have your own rules in the iptables "INPUT" chain; Calico will insert its rules at the top of that chain, then "RETURN" packets to the "INPUT" chain once it has completed processing workload endpoint egress policy. Use ACCEPT to unconditionally accept packets from workloads after processing workload endpoint egress policy. | +| Schema | One of: `ACCEPT`, `DROP`, `RETURN` (case insensitive) | +| Default | `DROP` | - "cpu": "100m", +**Tab: Environment variable** - "memory": "1000Mi" +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_DEFAULTENDPOINTTOHOSTACTION` | +| Description | Controls what happens to traffic that goes from a workload endpoint to the host itself (after the endpoint's egress policy is applied). By default, Calico blocks traffic from workload endpoints to the host itself with an iptables "DROP" action. If you want to allow some or all traffic from endpoint to host, set this parameter to RETURN or ACCEPT. 
Use RETURN if you have your own rules in the iptables "INPUT" chain; Calico will insert its rules at the top of that chain, then "RETURN" packets to the "INPUT" chain once it has completed processing workload endpoint egress policy. Use ACCEPT to unconditionally accept packets from workloads after processing workload endpoint egress policy. | +| Schema | One of: `ACCEPT`, `DROP`, `RETURN` (case insensitive) | +| Default | `DROP` | - } + - } +#### `DeviceRouteProtocol` -} + -{ +**Tab: Configuration file** - "name": "webhooks-processor", +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DeviceRouteProtocol` | +| Description | Controls the protocol to set on routes programmed by Felix. The protocol is an 8-bit label used to identify the owner of the route. | +| Schema | Integer | +| Default | `3` | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_DEVICEROUTEPROTOCOL` | +| Description | Controls the protocol to set on routes programmed by Felix. The protocol is an 8-bit label used to identify the owner of the route. | +| Schema | Integer | +| Default | `3` | - "cpu": "1", + - "memory": "1000Mi" +#### `DeviceRouteSourceAddress` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "100m", +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DeviceRouteSourceAddress` | +| Description | IPv4 address to set as the source hint for routes programmed by Felix. When not set the source address for local traffic from host to workload will be determined by the kernel. 
| +| Schema | IPv4 address | +| Default | none | - "memory": "1000Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_DEVICEROUTESOURCEADDRESS` | +| Description | IPv4 address to set as the source hint for routes programmed by Felix. When not set the source address for local traffic from host to workload will be determined by the kernel. | +| Schema | IPv4 address | +| Default | none | - } + -} -``` +#### `DeviceRouteSourceAddressIPv6` -## LogCollector custom resource[​](#logcollector-custom-resource) + -The [LogCollector](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#logcollector) CR provides a way to configure resources for FluentdDaemonSet, EKSLogForwarderDeployment. +**Tab: Configuration file** -### FluentdDaemonSet.[​](#fluentddaemonset) +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DeviceRouteSourceAddressIPv6` | +| Description | IPv6 address to set as the source hint for routes programmed by Felix. When not set the source address for local traffic from host to workload will be determined by the kernel. 
| +| Schema | IPv6 address | +| Default | none | -To configure resource specification for the [FluentdDaemonSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#fluentddaemonset), patch the LogCollector CR using the below command: +**Tab: Environment variable** -```bash -kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"fluentdDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"fluentd","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_DEVICEROUTESOURCEADDRESSIPV6` | +| Description | IPv6 address to set as the source hint for routes programmed by Felix. When not set the source address for local traffic from host to workload will be determined by the kernel. | +| Schema | IPv6 address | +| Default | none | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + -#### Verification[​](#verification-14) +#### `DisableConntrackInvalidCheck` -You can verify the configured resources using the following command: + -```bash -kubectl get daemonset.apps/fluentd-node -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +**Tab: Configuration file** -This command will output the configured resource requests and limits for the FluentdDaemonSet in JSON format. 
+| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DisableConntrackInvalidCheck` | +| Description | Disables the check for invalid connections in conntrack. While the conntrack invalid check helps to detect malicious traffic, it can also cause issues with certain multi-NIC scenarios. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | -```bash -{ +**Tab: Environment variable** - "name": "fluentd", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_DISABLECONNTRACKINVALIDCHECK` | +| Description | Disables the check for invalid connections in conntrack. While the conntrack invalid check helps to detect malicious traffic, it can also cause issues with certain multi-NIC scenarios. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "resources": { + - "limits": { +#### `DropActionOverride` - "cpu": "1", + - "memory": "1000Mi" +**Tab: Configuration file** - }, +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `DropActionOverride` | +| Description | Overrides the Drop action in Felix, optionally changing the behavior to Accept, and optionally adding Log. Possible values are Drop, LogAndDrop, Accept, LogAndAccept. 
| +| Schema | One of: `ACCEPT`, `DROP`, `LOGandACCEPT`, `LOGandDROP` (case insensitive) | +| Default | `DROP` | - "requests": { +**Tab: Environment variable** - "cpu": "100m", +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------- | +| Key | `FELIX_DROPACTIONOVERRIDE` | +| Description | Overrides the Drop action in Felix, optionally changing the behavior to Accept, and optionally adding Log. Possible values are Drop, LogAndDrop, Accept, LogAndAccept. | +| Schema | One of: `ACCEPT`, `DROP`, `LOGandACCEPT`, `LOGandDROP` (case insensitive) | +| Default | `DROP` | - "memory": "100Mi" + - } +#### `EndpointStatusPathPrefix` - } + -} -``` +**Tab: Configuration file** -### EKSLogForwarderDeployment.[​](#ekslogforwarderdeployment) +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------- | +| Key | `EndpointStatusPathPrefix` | +| Description | The path to the directory where endpoint status will be written. Endpoint status file reporting is disabled if this field is left empty. The chosen directory should match the directory used by the CNI plugin for PodStartupDelay.
| +| Schema | Path to file | +| Default | `/var/run/calico` | -To configure resource specification for the [EKSLogForwarderDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#ekslogforwarderdeployment), patch the LogCollector CR using the below command: +**Tab: Environment variable** -```bash -kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"eksLogForwarderDeployment": {"spec": {"template": {"spec": {"containers":[{"name":"eks-log-forwarder","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------- | +| Key | `FELIX_ENDPOINTSTATUSPATHPREFIX` | +| Description | The path to the directory where endpoint status will be written. Endpoint status file reporting is disabled if this field is left empty. The chosen directory should match the directory used by the CNI plugin for PodStartupDelay. | +| Schema | Path to file | +| Default | `/var/run/calico` | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + -#### Verification[​](#verification-15) +#### `ExternalNodesCIDRList` -You can verify the configured resources using the following command: + -```bash -kubectl get deployment.apps/eks-log-forwarder -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +**Tab: Configuration file** -This command will output the configured resource requests and limits for the EKSLogForwarderDeployment in JSON format.
+| Attribute | Value | +| ----------- | ------------------------------------------------------------------------- | +| Key | `ExternalNodesCIDRList` | +| Description | A list of CIDRs of external, non-Calico nodes from which VXLAN/IPIP overlay traffic will be allowed. By default, external tunneled traffic is blocked to reduce attack surface. | +| Schema | Comma-delimited list of CIDRs | +| Default | none | -```bash -{ +**Tab: Environment variable** - "name": "eks-log-forwarder", +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------- | +| Key | `FELIX_EXTERNALNODESCIDRLIST` | +| Description | A list of CIDRs of external, non-Calico nodes from which VXLAN/IPIP overlay traffic will be allowed. By default, external tunneled traffic is blocked to reduce attack surface.
| +| Schema | Comma-delimited list of CIDRs | +| Default | none | - "resources": { + - "limits": { +#### `FailsafeInboundHostPorts` - "cpu": "1", + - "memory": "1000Mi" +**Tab: Configuration file** - }, +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FailsafeInboundHostPorts` | +| Description | A list of ProtoPort struct objects including UDP/TCP/SCTP ports and CIDRs that Felix will allow incoming traffic to host endpoints on irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. For backwards compatibility, if the protocol is not specified, it defaults to "tcp". If a CIDR is not specified, it will allow traffic from all addresses. To disable all inbound host ports, use the value "\[]". The default value allows ssh access, DHCP, BGP, etcd and the Kubernetes API. | +| Schema | Comma-delimited list of numeric ports with optional protocol and CIDR:`(tcp\|udp)::`, `(tcp\|udp):` or ``. IPv6 CIDRs must be enclosed in square brackets. 
| +| Default | `tcp:22,udp:68,tcp:179,tcp:2379,tcp:2380,tcp:5473,tcp:6443,tcp:6666,tcp:6667` | - "requests": { +**Tab: Environment variable** - "cpu": "100m", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_FAILSAFEINBOUNDHOSTPORTS` | +| Description | A list of ProtoPort struct objects including UDP/TCP/SCTP ports and CIDRs that Felix will allow incoming traffic to host endpoints on irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. For backwards compatibility, if the protocol is not specified, it defaults to "tcp". If a CIDR is not specified, it will allow traffic from all addresses. To disable all inbound host ports, use the value "\[]". The default value allows ssh access, DHCP, BGP, etcd and the Kubernetes API. | +| Schema | Comma-delimited list of numeric ports with optional protocol and CIDR:`(tcp\|udp)::`, `(tcp\|udp):` or ``. IPv6 CIDRs must be enclosed in square brackets. 
| +| Default | `tcp:22,udp:68,tcp:179,tcp:2379,tcp:2380,tcp:5473,tcp:6443,tcp:6666,tcp:6667` | - "memory": "100Mi" + - } +#### `FailsafeOutboundHostPorts` - } + -} -``` +**Tab: Configuration file** -## LogStorage custom resource[​](#logstorage-custom-resource) +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FailsafeOutboundHostPorts` | +| Description | A list of PortProto struct objects including UDP/TCP/SCTP ports and CIDRs that Felix will allow outgoing traffic from host endpoints to irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. For backwards compatibility, if the protocol is not specified, it defaults to "tcp". If a CIDR is not specified, it will allow traffic from all addresses. To disable all outbound host ports, use the value "\[]". The default value opens etcd's standard ports to ensure that Felix does not get cut off from etcd as well as allowing DHCP, DNS, BGP and the Kubernetes API. | +| Schema | Comma-delimited list of numeric ports with optional protocol and CIDR:`(tcp\|udp)::`, `(tcp\|udp):` or ``. IPv6 CIDRs must be enclosed in square brackets. 
| +| Default | `udp:53,udp:67,tcp:179,tcp:2379,tcp:2380,tcp:5473,tcp:6443,tcp:6666,tcp:6667` | -The [LogStorage](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#logstorage) CR provides a way to configure resources for ECKOperatorStatefulSet, Kibana, LinseedDeployment, ElasticsearchMetricsDeployment. The following sections provide example configurations for this CR. +**Tab: Environment variable** -### ECKOperatorStatefulSet.[​](#eckoperatorstatefulset) +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_FAILSAFEOUTBOUNDHOSTPORTS` | +| Description | A list of PortProto struct objects including UDP/TCP/SCTP ports and CIDRs that Felix will allow outgoing traffic from host endpoints to irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. For backwards compatibility, if the protocol is not specified, it defaults to "tcp". If a CIDR is not specified, it will allow traffic from all addresses. To disable all outbound host ports, use the value "\[]". The default value opens etcd's standard ports to ensure that Felix does not get cut off from etcd as well as allowing DHCP, DNS, BGP and the Kubernetes API. | +| Schema | Comma-delimited list of numeric ports with optional protocol and CIDR:`(tcp\|udp)::`, `(tcp\|udp):` or ``. 
IPv6 CIDRs must be enclosed in square brackets. | +| Default | `udp:53,udp:67,tcp:179,tcp:2379,tcp:2380,tcp:5473,tcp:6443,tcp:6666,tcp:6667` | -To configure resource specification for the [ECKOperatorStatefulSet](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#eckoperatorstatefulset), patch the LogStorage CR using the below command: + -```bash -kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"eckOperatorStatefulSet":{"spec": {"template": {"spec": {"containers":[{"name":"manager","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +#### `FloatingIPs` -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + -#### Verification[​](#verification-16) +**Tab: Configuration file** -You can verify the configured resources using the following command: +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FloatingIPs` | +| Description | Configures whether or not Felix will program non-OpenStack floating IP addresses. (OpenStack-derived floating IPs are always programmed, regardless of this setting.) | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | -```bash -kubectl get statefulset.apps/elastic-operator -n tigera-eck-operator -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +**Tab: Environment variable** -This command will output the configured resource requests and limits for the ECKOperatorStatefulSet in JSON format. 
+| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_FLOATINGIPS` | +| Description | Configures whether or not Felix will program non-OpenStack floating IP addresses. (OpenStack-derived floating IPs are always programmed, regardless of this setting.) | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | -```bash -{ + - "name": "manager", +#### `IPForwarding` - "resources": { + - "limits": { +**Tab: Configuration file** - "cpu": "1", +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `IPForwarding` | +| Description | Controls whether Felix sets the host sysctls to enable IP forwarding. IP forwarding is required when using Calico for workload networking. This should be disabled only on hosts where Calico is used solely for host protection. In BPF mode, due to a kernel interaction, either IPForwarding must be enabled or BPFEnforceRPF must be disabled. 
| +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Enabled` | - "memory": "1000Mi" +**Tab: Environment variable** - }, +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPFORWARDING` | +| Description | Controls whether Felix sets the host sysctls to enable IP forwarding. IP forwarding is required when using Calico for workload networking. This should be disabled only on hosts where Calico is used solely for host protection. In BPF mode, due to a kernel interaction, either IPForwarding must be enabled or BPFEnforceRPF must be disabled. | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Enabled` | - "requests": { + - "cpu": "100m", +#### `InterfaceExclude` - "memory": "100Mi" + - } +**Tab: Configuration file** - } +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `InterfaceExclude` | +| Description | A comma-separated list of interface names that should be excluded when Felix is resolving host endpoints. The default value ensures that Felix ignores Kubernetes' internal `kube-ipvs0` device. 
If you want to exclude multiple interface names using a single value, the list supports regular expressions. For regular expressions you must wrap the value with `/`. For example having values `/^kube/,veth1` will exclude all interfaces that begin with `kube` and also the interface `veth1`. | +| Schema | Comma-delimited list of Linux interface names/regex patterns. Regex patterns must start/end with `/`. | +| Default | `kube-ipvs0` | -} -``` +**Tab: Environment variable** -### Kibana[​](#kibana) +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_INTERFACEEXCLUDE` | +| Description | A comma-separated list of interface names that should be excluded when Felix is resolving host endpoints. The default value ensures that Felix ignores Kubernetes' internal `kube-ipvs0` device. If you want to exclude multiple interface names using a single value, the list supports regular expressions. For regular expressions you must wrap the value with `/`. For example having values `/^kube/,veth1` will exclude all interfaces that begin with `kube` and also the interface `veth1`. | +| Schema | Comma-delimited list of Linux interface names/regex patterns. Regex patterns must start/end with `/`. 
| +| Default | `kube-ipvs0` | -To configure resource specification for the [Kibana](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#kibana), patch the LogStorage CR using the below command: + -```bash -kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"kibana":{"spec": {"template": {"spec": {"containers":[{"name":"kibana","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +#### `InterfacePrefix` -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + -#### Verification[​](#verification-17) +**Tab: Configuration file** -You can verify the configured resources using the following command: +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `InterfacePrefix` | +| Description | The interface name prefix that identifies workload endpoints and so distinguishes them from host endpoint interfaces. Note: in environments other than bare metal, the orchestrators configure this appropriately. For example our Kubernetes and Docker integrations set the 'cali' value, and our OpenStack integration sets the 'tap' value. 
| +| Schema | String matching regex `^[a-zA-Z0-9_-]{1,15}(,[a-zA-Z0-9_-]{1,15})*$` | +| Default | `cali` | -```bash -kubectl get deployment.apps/tigera-secure-kb -n tigera-kibana -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +**Tab: Environment variable** -This command will output the configured resource requests and limits for the Kibana in JSON format. +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_INTERFACEPREFIX` | +| Description | The interface name prefix that identifies workload endpoints and so distinguishes them from host endpoint interfaces. Note: in environments other than bare metal, the orchestrators configure this appropriately. For example our Kubernetes and Docker integrations set the 'cali' value, and our OpenStack integration sets the 'tap' value. | +| Schema | String matching regex `^[a-zA-Z0-9_-]{1,15}(,[a-zA-Z0-9_-]{1,15})*$` | +| Default | `cali` | -```bash -{ + - "name": "kibana", +#### `InterfaceRefreshInterval` - "resources": { + - "limits": { +**Tab: Configuration file** - "cpu": "1", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------- | +| Key | `InterfaceRefreshInterval` | +| Description | The period at which Felix rescans local interfaces to verify their state. The rescan can be disabled by setting the interval to 0. 
| +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) | - "memory": "1000Mi" +**Tab: Environment variable** - }, +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_INTERFACEREFRESHINTERVAL` | +| Description | The period at which Felix rescans local interfaces to verify their state. The rescan can be disabled by setting the interval to 0. | +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) | - "requests": { + - "cpu": "100m", +#### `Ipv6Support` - "memory": "100Mi" + - } +**Tab: Configuration file** - } +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `Ipv6Support` | +| Description | Controls whether Felix enables support for IPv6 (if supported by the in-use dataplane). | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | -} -``` +**Tab: Environment variable** -### LinseedDeployment[​](#linseeddeployment) +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPV6SUPPORT` | +| Description | Controls whether Felix enables support for IPv6 (if supported by the in-use dataplane). | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `true` | -To configure resource specification for the [LinseedDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#linseeddeployment), patch the LogStorage CR using the below command: + -```bash -kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"linseedDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-linseed","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +#### `IstioAmbientMode` -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + -#### Verification[​](#verification-18) +**Tab: Configuration file** -You can verify the configured resources using the following command: +| Attribute | Value | +| ----------- | ------------------------------------------------------------------- | +| Key | `IstioAmbientMode` | +| Description | Configures Felix to work together with Tigera's Istio distribution. | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | -```bash -kubectl get deployment.apps/tigera-linseed -n tigera-elasticsearch -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +**Tab: Environment variable** -This command will output the configured resource requests and limits for the LinseedDeployment in JSON format. +| Attribute | Value | +| ----------- | ------------------------------------------------------------------- | +| Key | `FELIX_ISTIOAMBIENTMODE` | +| Description | Configures Felix to work together with Tigera's Istio distribution. 
|
+| Schema | One of: `Disabled`, `Enabled` (case insensitive) |
+| Default | `Disabled` |
-```bash
-{
+
-  "name": "tigera-linseed",
+#### `IstioDSCPMark`
-  "resources": {
+
-    "limits": {
+**Tab: Configuration file**
-      "cpu": "1",
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Key | `IstioDSCPMark` |
+| Description | Sets the value to use when directing traffic to Istio ZTunnel, when Istio is enabled. The mark is set only on SYN packets at the final hop to avoid interference with other protocols. This value is reserved by Calico and must not be used with other Istio installations. |
+| Schema | Numeric value: An integer from 0 to 63, representing the 6-bit DSCP code directly; Named value: A case-insensitive string corresponding to a standardized DSCP name (e.g., "CS0", "AF11", "AF21", "EF", etc.) as defined in the IANA registry for Differentiated Services Field Codepoints. |
+| Default | `23` |
-      "memory": "1000Mi"
+**Tab: Environment variable**
-    },
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Key | `FELIX_ISTIODSCPMARK` |
+| Description | Sets the value to use when directing traffic to Istio ZTunnel, when Istio is enabled. The mark is set only on SYN packets at the final hop to avoid interference with other protocols. This value is reserved by Calico and must not be used with other Istio installations.
| +| Schema | Numeric value: An integer from 0 to 63, representing the 6-bit DSCP code directly; Named value: A case-insensitive string corresponding to a standardized DSCP name (e.g., "CS0", "AF11", "AF21", "EF", etc.) as defined in the IANA registry for Differentiated Services Field Codepoints. | +| Default | `23` | - "requests": { + - "cpu": "100m", +#### `KubeMasqueradeBit` - "memory": "100Mi" + - } +**Tab: Configuration file** - } +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `KubeMasqueradeBit` | +| Description | Should be set to the same value as --iptables-masquerade-bit of kube-proxy when TPROXY is used. The default is the same as kube-proxy default thus only needs a change if kube-proxy is using a non-standard setting. Must be within the range of 0-31. | +| Schema | Integer | +| Default | `14` | -} -``` +**Tab: Environment variable** -### ElasticsearchMetricsDeployment[​](#elasticsearchmetricsdeployment) +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_KUBEMASQUERADEBIT` | +| Description | Should be set to the same value as --iptables-masquerade-bit of kube-proxy when TPROXY is used. The default is the same as kube-proxy default thus only needs a change if kube-proxy is using a non-standard setting. Must be within the range of 0-31. 
| +| Schema | Integer | +| Default | `14` | -To configure resource specification for the [ElasticsearchMetricsDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#elasticsearchmetricsdeployment), patch the LogStorage CR using the below command: + -```bash -kubectl patch logstorage tigera-secure --type=merge --patch='{"spec": {"elasticsearchMetricsDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-elasticsearch-metrics","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}' -``` +#### `MTUIfacePattern` -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + -#### Verification[​](#verification-19) +**Tab: Configuration file** -You can verify the configured resources using the following command: +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `MTUIfacePattern` | +| Description | A regular expression that controls which interfaces Felix should scan in order to calculate the host's MTU. This should not match workload interfaces (usually named cali...). | +| Schema | Regular expression | +| Default | `^((en\|wl\|ww\|sl\|ib)[Pcopsvx].*\|(eth\|wlan\|wwan).*)` | -```bash -kubectl get deployment.apps/tigera-elasticsearch-metrics -n tigera-elasticsearch -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +**Tab: Environment variable** -This command will output the configured resource requests and limits for the ElasticsearchMetricsDeployment in JSON format. 
+| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_MTUIFACEPATTERN` | +| Description | A regular expression that controls which interfaces Felix should scan in order to calculate the host's MTU. This should not match workload interfaces (usually named cali...). | +| Schema | Regular expression | +| Default | `^((en\|wl\|ww\|sl\|ib)[Pcopsvx].*\|(eth\|wlan\|wwan).*)` | -```bash -{ + - "name": "tigera-elasticsearch-metrics", +#### `NATOutgoingAddress` - "resources": { + - "limits": { +**Tab: Configuration file** - "cpu": "1", +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NATOutgoingAddress` | +| Description | Specifies an address to use when performing source NAT for traffic in a natOutgoing pool that is leaving the network. By default the address used is an address on the interface the traffic is leaving on (i.e. it uses the iptables MASQUERADE target). | +| Schema | IPv4 address | +| Default | none | - "memory": "1000Mi" +**Tab: Environment variable** - }, +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_NATOUTGOINGADDRESS` | +| Description | Specifies an address to use when performing source NAT for traffic in a natOutgoing pool that is leaving the network. By default the address used is an address on the interface the traffic is leaving on (i.e. 
it uses the iptables MASQUERADE target). |
+| Schema | IPv4 address |
+| Default | none |
-    "requests": {
+
-      "cpu": "100m",
+#### `NATOutgoingExclusions`
-      "memory": "1000Mi"
+
-    }
+**Tab: Configuration file**
-  }
+| Attribute | Value |
+| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `NATOutgoingExclusions` |
+| Description | When an IP pool setting `natOutgoing` is true, packets sent from Calico networked containers in this IP pool to destinations will be masqueraded. Configure which type of destinations is excluded from being masqueraded. - IPPoolsOnly: destinations outside of this IP pool will be masqueraded. - IPPoolsAndHostIPs: destinations outside of this IP pool and all hosts will be masqueraded. |
+| Schema | One of: `IPPoolsAndHostIPs`, `IPPoolsOnly` (case insensitive) |
+| Default | `IPPoolsOnly` |
-}
-```
+**Tab: Environment variable**
-## ManagementClusterConnection custom resource[​](#managementclusterconnection-custom-resource)
+| Attribute | Value |
+| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `FELIX_NATOUTGOINGEXCLUSIONS` |
+| Description | When an IP pool setting `natOutgoing` is true, packets sent from Calico networked containers in this IP pool to destinations will be masqueraded.
Configure which type of destinations is excluded from being masqueraded. - IPPoolsOnly: destinations outside of this IP pool will be masqueraded. - IPPoolsAndHostIPs: destinations outside of this IP pool and all hosts will be masqueraded. | +| Schema | One of: `IPPoolsAndHostIPs`, `IPPoolsOnly` (case insensitive) | +| Default | `IPPoolsOnly` | -The [ManagementClusterConnection](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#managementclusterconnection) CR provides a way to configure resources for GuardianDeployment. The following sections provide example configurations for this CR. + -### GuardianDeployment[​](#guardiandeployment) +#### `NATPortRange` -To configure resource specification for the [GuardianDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#guardiandeployment), patch the ManagementClusterConnection CR using the below command: + -```bash -kubectl patch managementclusterconnection tigera-secure --type=merge --patch='{"spec": {"guardianDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-guardian","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +**Tab: Configuration file** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NATPortRange` | +| Description | Specifies the range of ports that is used for port mapping when doing outgoing NAT. When unset the default behavior of the network stack is used. 
| +| Schema | Port range: either a single number in \[0,65535] or a range of numbers `n:m` | +| Default | none | -#### Verification[​](#verification-20) +**Tab: Environment variable** -You can verify the configured resources using the following command: +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_NATPORTRANGE` | +| Description | Specifies the range of ports that is used for port mapping when doing outgoing NAT. When unset the default behavior of the network stack is used. | +| Schema | Port range: either a single number in \[0,65535] or a range of numbers `n:m` | +| Default | none | -```bash -kubectl get deployment.apps/tigera-guardian -n tigera-guardian -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` + -This command will output the configured resource requests and limits for the GuardianDeployment in JSON format. 
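In a running cluster, the NAT settings above are normally changed through the FelixConfiguration resource rather than a config file or environment variable, with each key appearing as a camelCased field (`natPortRange`, `natOutgoingAddress`). A minimal sketch, assuming the conventional cluster-wide FelixConfiguration named `default`; the port range `32000:32767` and the address `192.0.2.10` are placeholder values for illustration:

```shell
# Sketch: pin the outgoing-NAT port range and SNAT address cluster-wide.
# "default" is the usual FelixConfiguration name; values are placeholders.
PATCH='{"spec":{"natPortRange":"32000:32767","natOutgoingAddress":"192.0.2.10"}}'

# Apply when kubectl is available; otherwise just print the patch body.
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch felixconfiguration default --type=merge --patch "$PATCH" \
    || echo "kubectl patch failed (is a cluster reachable?)"
else
  echo "$PATCH"
fi
```

Using `--type=merge` keeps the patch limited to the listed fields, so other FelixConfiguration settings are left untouched.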
+#### `NFTablesDNSPolicyMode` -```bash -{ + - "name": "tigera-guardian", +**Tab: Configuration file** - "resources": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NFTablesDNSPolicyMode` | +| Description | Specifies how DNS policy programming will be handled for NFTables. DelayDeniedPacket - Felix delays any denied packet that traversed a policy that included egress domain matches, but did not match. The packet is released after a fixed time, or after the destination IP address was programmed. DelayDNSResponse - Felix delays any DNS response until related IPSets are programmed. This introduces some latency to all DNS packets (even when no IPSet programming is required), but it ensures policy hit statistics are accurate. This is the recommended setting when you are making use of staged policies or policy rule hit statistics. NoDelay - Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. 
Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. | +| Schema | One of: `DelayDNSResponse`, `DelayDeniedPacket`, `NoDelay` (case insensitive) | +| Default | `DelayDeniedPacket` | - "limits": { +**Tab: Environment variable** - "cpu": "1", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_NFTABLESDNSPOLICYMODE` | +| Description | Specifies how DNS policy programming will be handled for NFTables. DelayDeniedPacket - Felix delays any denied packet that traversed a policy that included egress domain matches, but did not match. The packet is released after a fixed time, or after the destination IP address was programmed. DelayDNSResponse - Felix delays any DNS response until related IPSets are programmed. This introduces some latency to all DNS packets (even when no IPSet programming is required), but it ensures policy hit statistics are accurate. 
This is the recommended setting when you are making use of staged policies or policy rule hit statistics. NoDelay - Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. | +| Schema | One of: `DelayDNSResponse`, `DelayDeniedPacket`, `NoDelay` (case insensitive) | +| Default | `DelayDeniedPacket` | - "memory": "1000Mi" + - }, +#### `NFTablesMode` - "requests": { + - "cpu": "100m", +**Tab: Configuration file** - "memory": "100Mi" +| Attribute | Value | +| ----------- | ------------------------------------------------ | +| Key | `NFTablesMode` | +| Description | Configures nftables support in Felix. | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | - } +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | ------------------------------------------------ | +| Key | `FELIX_NFTABLESMODE` | +| Description | Configures nftables support in Felix. | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | -} -``` + -## Manager custom resource[​](#manager-custom-resource) +#### `NetlinkTimeoutSecs` -The [Manager](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#manager) CR provides a way to configure resources for ManagerDeployment. The following sections provide example configurations for this CR. 
+ -### ManagerDeployment[​](#managerdeployment) +**Tab: Configuration file** -To configure resource specification for the [ManagerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#managerdeployment), patch the Manager CR using the below command: +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NetlinkTimeoutSecs` | +| Description | The timeout when talking to the kernel over the netlink protocol, used for programming routes, rules, and other kernel objects. | +| Schema | Seconds (floating point) | +| Default | `10` (10s) | -```bash -kubectl patch manager tigera-secure --type=merge --patch='{"spec": {"managerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-voltron","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-ui-apis","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-manager","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +**Tab: Environment variable** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_NETLINKTIMEOUTSECS` | +| Description | The timeout when talking to the kernel over the netlink protocol, used for programming routes, rules, and other kernel objects. 
| +| Schema | Seconds (floating point) | +| Default | `10` (10s) | -#### Verification[​](#verification-21) + -You can verify the configured resources using the following command: +#### `NfNetlinkBufSize` -```bash -kubectl get deployment.apps/tigera-manager -n tigera-manager -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` + -This command will output the configured resource requests and limits for the ManagerDeployment in JSON format. +**Tab: Configuration file** -```bash -{ +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NfNetlinkBufSize` | +| Description | Controls the size of NFLOG messages that the kernel will try to send to Felix. NFLOG messages are used to report flow verdicts from the kernel. Warning: currently increasing the value may cause errors due to a bug in the netlink library. | +| Schema | Integer | +| Default | `65536` | - "name": "tigera-ui-apis", +**Tab: Environment variable** - "resources": { +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_NFNETLINKBUFSIZE` | +| Description | Controls the size of NFLOG messages that the kernel will try to send to Felix. NFLOG messages are used to report flow verdicts from the kernel. Warning: currently increasing the value may cause errors due to a bug in the netlink library. 
| +| Schema | Integer | +| Default | `65536` |

- "limits": { + 

- "cpu": "1", +#### `PolicySyncPathPrefix`

- "memory": "1000Mi" + 

- }, +**Tab: Configuration file**

- "requests": { +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------- | +| Key | `PolicySyncPathPrefix` | +| Description | Used by Felix to communicate policy changes to external services, like Application layer policy. | +| Schema | Path to file | +| Default | none |

- "cpu": "100m", +**Tab: Environment variable**

- "memory": "100Mi" +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------- | +| Key | `FELIX_POLICYSYNCPATHPREFIX` | +| Description | Used by Felix to communicate policy changes to external services, like Application layer policy. | +| Schema | Path to file | +| Default | none |

- } + 

- } +#### `ProgramClusterRoutes`

-} + 

-{ +**Tab: Configuration file**

- "name": "tigera-voltron", +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------- | +| Key | `ProgramClusterRoutes` | +| Description | Specifies whether Felix should program IPIP routes instead of BIRD. Felix always programs VXLAN routes. | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` |

- "resources": { +**Tab: Environment variable**

- "limits": { +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_PROGRAMCLUSTERROUTES` | +| Description | Specifies whether Felix should program IPIP routes instead of BIRD. Felix always programs VXLAN routes. 
| +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | - "cpu": "1", + - "memory": "1000Mi" +#### `RemoveExternalRoutes` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "100m", +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `RemoveExternalRoutes` | +| Description | Controls whether Felix will remove unexpected routes to workload interfaces. Felix will always clean up expected routes that use the configured DeviceRouteProtocol. To add your own routes, you must use a distinct protocol (in addition to setting this field to false). | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | - "memory": "100Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_REMOVEEXTERNALROUTES` | +| Description | Controls whether Felix will remove unexpected routes to workload interfaces. Felix will always clean up expected routes that use the configured DeviceRouteProtocol. To add your own routes, you must use a distinct protocol (in addition to setting this field to false). | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `true` |

- } + 

-} +#### `RequireMTUFile`

-{ + 

- "name": "tigera-manager", +**Tab: Configuration file**

- "resources": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `RequireMTUFile` | +| Description | Specifies whether the MTU file is required to start Felix. Optional, to preserve previous behavior. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` |

- "limits": { +**Tab: Environment variable**

- "cpu": "1", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_REQUIREMTUFILE` | +| Description | Specifies whether the MTU file is required to start Felix. Optional, to preserve previous behavior. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` |

- "memory": "1000Mi" + 

- }, +#### `RouteRefreshInterval`

- "requests": { + 

- "cpu": "100m", +**Tab: Configuration file**

- "memory": "100Mi" +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `RouteRefreshInterval` | +| Description | The period at which Felix re-checks the routes in the dataplane to ensure that no other process has accidentally broken Calico's rules. Set to 0 to disable route refresh. 
| +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) | - } +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_ROUTEREFRESHINTERVAL` | +| Description | The period at which Felix re-checks the routes in the dataplane to ensure that no other process has accidentally broken Calico's rules. Set to 0 to disable route refresh. | +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) | -} -``` + -## Monitor custom resource[​](#monitor-custom-resource) +#### `RouteSource` -The [Monitor](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#monitor) CR provides a way to configure resources for Prometheus, Alertmanager. The following sections provide example configurations for this CR. + -### Prometheus[​](#prometheus) +**Tab: Configuration file** -To configure resource specification for the [Prometheus](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#prometheus), Resources for the default container "prometheus" can be configured using the "resources" field under "commonPrometheusFields". For all other injected containers, such as "authn-proxy", resource configuration can be set using the "containers" struct, as shown below in the patch command below. +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `RouteSource` | +| Description | Configures where Felix gets its routing information. - WorkloadIPs: use workload endpoints to construct routes. - CalicoIPAM: the default - use IPAM data to construct routes. 
| +| Schema | One of: `CalicoIPAM`, `WorkloadIPs` (case insensitive) | +| Default | `CalicoIPAM` | -```bash -kubectl patch monitor tigera-secure --type=merge --patch='{"spec": {"prometheus": {"spec":{ "commonPrometheusFields": {"resources": {"limits": {"cpu":"500m","memory":"500Mi"}, "requests": {"cpu":"50m", "memory":"50Mi"}}, "containers":[{"name":"authn-proxy","resources":{"limits": {"cpu":"250m","memory":"500Mi"},"requests": {"cpu":"25m","memory":"50Mi"}}}]}}}}}' -``` +**Tab: Environment variable** -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_ROUTESOURCE` | +| Description | Configures where Felix gets its routing information. - WorkloadIPs: use workload endpoints to construct routes. - CalicoIPAM: the default - use IPAM data to construct routes. | +| Schema | One of: `CalicoIPAM`, `WorkloadIPs` (case insensitive) | +| Default | `CalicoIPAM` | -#### Verification[​](#verification-22) + -You can verify the configured resources using the following command: +#### `RouteSyncDisabled` -```bash -kubectl get statefulset.apps/prometheus-calico-node-prometheus -n tigera-prometheus -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` + -This command will output the configured resource requests and limits for the Prometheus in JSON format. +**Tab: Configuration file** -> **SECONDARY:** The "config-reloader" container has default resource values set based by the Prometheus resource. 
+| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `RouteSyncDisabled` | +| Description | Will disable all operations performed on the route table. Set to true to run in network-policy mode only. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | -```bash -{ +**Tab: Environment variable** - "name": "prometheus", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_ROUTESYNCDISABLED` | +| Description | Will disable all operations performed on the route table. Set to true to run in network-policy mode only. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "resources": { + - "limits": { +#### `RouteTableRange` - "cpu": "500m", + - "memory": "500Mi" +**Tab: Configuration file** - }, +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `RouteTableRange` | +| Description | Deprecated in favor of RouteTableRanges. Calico programs additional Linux route tables for various purposes. RouteTableRange specifies the indices of the route tables that Calico should use. | +| Schema | Range of route table indices `n-m`, where `n` and `m` are integers in \[0,250]. 
| +| Default | none |

- "requests": { +**Tab: Environment variable**

- "cpu": "50m", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_ROUTETABLERANGE` | +| Description | Deprecated in favor of RouteTableRanges. Calico programs additional Linux route tables for various purposes. RouteTableRange specifies the indices of the route tables that Calico should use. | +| Schema | Range of route table indices `n-m`, where `n` and `m` are integers in \[0,250]. | +| Default | none |

- "memory": "50Mi" + 

- } +#### `RouteTableRanges`

- } + 

-} +**Tab: Configuration file**

-{ +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `RouteTableRanges` | +| Description | Calico programs additional Linux route tables for various purposes. RouteTableRanges specifies a set of table index ranges that Calico should use. Deprecates `RouteTableRange`, overrides `RouteTableRange`. | +| Schema | Comma or space-delimited list of route table ranges of the form `n-m` where `n` and `m` are integers in \[0,4294967295]. The sum of the sizes of all ranges may not exceed 65535. | +| Default | none |

- "name": "config-reloader", +**Tab: Environment variable**

- "resources": { +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_ROUTETABLERANGES` | +| Description | Calico programs additional Linux route tables for various purposes. 
RouteTableRanges specifies a set of table index ranges that Calico should use. Deprecates `RouteTableRange`, overrides `RouteTableRange`. | +| Schema | Comma or space-delimited list of route table ranges of the form `n-m` where `n` and `m` are integers in \[0,4294967295]. The sum of the sizes of all ranges may not exceed 65535. | +| Default | none |

- "limits": { + 

- "cpu": "10m", +#### `ServiceLoopPrevention`

- "memory": "50Mi" + 

- }, +**Tab: Configuration file**

- "requests": { +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `ServiceLoopPrevention` | +| Description | When service IP advertisement is enabled, prevent routing loops to service IPs that are not in use, by dropping or rejecting packets that do not get DNAT'd by kube-proxy, unless set to "Disabled", in which case such routing loops continue to be allowed. | +| Schema | One of: `Disabled`, `Drop`, `Reject` (case insensitive) | +| Default | `Drop` |

- "cpu": "10m", +**Tab: Environment variable**

- "memory": "50Mi" +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_SERVICELOOPPREVENTION` | +| Description | When service IP advertisement is enabled, prevent routing loops to service IPs that are not in use, by dropping or rejecting packets that do not get DNAT'd by kube-proxy, unless set to "Disabled", in which case such routing loops continue to be allowed. 
| +| Schema | One of: `Disabled`, `Drop`, `Reject` (case insensitive) | +| Default | `Drop` | - } + - } +#### `SidecarAccelerationEnabled` -} + -{ +**Tab: Configuration file** - "name": "authn-proxy", +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `SidecarAccelerationEnabled` | +| Description | Enables experimental sidecar acceleration. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_SIDECARACCELERATIONENABLED` | +| Description | Enables experimental sidecar acceleration. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "cpu": "250m", + - "memory": "500Mi" +#### `UseInternalDataplaneDriver` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "25m", +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `UseInternalDataplaneDriver` | +| Description | If true, Felix will use its internal dataplane programming logic. If false, it will launch an external dataplane driver and communicate with it over protobuf. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `true` | - "memory": "50Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_USEINTERNALDATAPLANEDRIVER` | +| Description | If true, Felix will use its internal dataplane programming logic. If false, it will launch an external dataplane driver and communicate with it over protobuf. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | - } + -} -``` +#### `WAFEventLogsFileDirectory` -### Alertmanager[​](#alertmanager) + -To configure resource specification for the [Alertmanager](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#alertmanager), you can set resources for the default container "prometheus" using the "resources" field under "commonPrometheusFields". For all other injected containers, like "authn-proxy", resource configuration can be set using the "containers" struct, as shown below in the patch command below. +**Tab: Configuration file** -```bash -kubectl patch monitor tigera-secure --type=merge --patch='{"spec": {"alertManager": {"spec": {"resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}}}}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------- | +| Key | `WAFEventLogsFileDirectory` | +| Description | Sets the directory where WAFEvent log files are stored. | +| Schema | String | +| Default | `/var/log/calico/waf` | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
+**Tab: Environment variable** -#### Verification[​](#verification-23) +| Attribute | Value | +| ----------- | ------------------------------------------------------- | +| Key | `FELIX_WAFEVENTLOGSFILEDIRECTORY` | +| Description | Sets the directory where WAFEvent log files are stored. | +| Schema | String | +| Default | `/var/log/calico/waf` | -You can verify the configured resources using the following command: + -```bash -kubectl get statefulset.apps/alertmanager-calico-node-alertmanager -n tigera-prometheus -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +#### `WAFEventLogsFileEnabled` -This command will output the configured resource requests and limits for the Alertmanager in JSON format. + -> **SECONDARY:** The "config-reloader" container has default resource values set by the Alertmanager resource. +**Tab: Configuration file** -```bash -{ +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `WAFEventLogsFileEnabled` | +| Description | Controls logging WAFEvent logs to a file. If false no WAFEvent logging to file will occur. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | - "name": "alertmanager", +**Tab: Environment variable** - "resources": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_WAFEVENTLOGSFILEENABLED` | +| Description | Controls logging WAFEvent logs to a file. If false no WAFEvent logging to file will occur. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `false` | - "limits": { + - "cpu": "1", +#### `WAFEventLogsFileMaxFileSizeMB` - "memory": "1000Mi" + - }, +**Tab: Configuration file** - "requests": { +| Attribute | Value | +| ----------- | -------------------------------------------------------------- | +| Key | `WAFEventLogsFileMaxFileSizeMB` | +| Description | Sets the max size in MB of WAFEvent log files before rotation. | +| Schema | Integer | +| Default | `100` | - "cpu": "100m", +**Tab: Environment variable** - "memory": "100Mi" +| Attribute | Value | +| ----------- | -------------------------------------------------------------- | +| Key | `FELIX_WAFEVENTLOGSFILEMAXFILESIZEMB` | +| Description | Sets the max size in MB of WAFEvent log files before rotation. | +| Schema | Integer | +| Default | `100` | - } + - } +#### `WAFEventLogsFileMaxFiles` -} + -{ +**Tab: Configuration file** - "name": "config-reloader", +| Attribute | Value | +| ----------- | ---------------------------------------------- | +| Key | `WAFEventLogsFileMaxFiles` | +| Description | Sets the number of WAFEvent log files to keep. | +| Schema | Integer | +| Default | `5` | - "resources": { +**Tab: Environment variable** - "limits": { +| Attribute | Value | +| ----------- | ---------------------------------------------- | +| Key | `FELIX_WAFEVENTLOGSFILEMAXFILES` | +| Description | Sets the number of WAFEvent log files to keep. | +| Schema | Integer | +| Default | `5` | - "cpu": "10m", + - "memory": "50Mi" +#### `WAFEventLogsFlushInterval` - }, + - "requests": { +**Tab: Configuration file** - "cpu": "10m", +| Attribute | Value | +| ----------- | ------------------------------------------------------------- | +| Key | `WAFEventLogsFlushInterval` | +| Description | Configures the interval at which Felix exports WAFEvent logs. 
| +| Schema | Seconds (floating point) | +| Default | `15` (15s) | - "memory": "50Mi" +**Tab: Environment variable** - } +| Attribute | Value | +| ----------- | ------------------------------------------------------------- | +| Key | `FELIX_WAFEVENTLOGSFLUSHINTERVAL` | +| Description | Configures the interval at which Felix exports WAFEvent logs. | +| Schema | Seconds (floating point) | +| Default | `15` (15s) | - } + -} -``` +#### `WorkloadSourceSpoofing` -## PacketCaptureAPI custom resource[​](#packetcaptureapi-custom-resource) + -The [PacketCaptureAPI](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#packetcaptureapi) CR provides a way to configure resources for PacketCapture. The following sections provide example configurations for this CR. +**Tab: Configuration file** -### PacketCaptureAPIDeployment[​](#packetcaptureapideployment) +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `WorkloadSourceSpoofing` | +| Description | Controls whether pods can use the allowedSourcePrefixes annotation to send traffic with a source IP address that is not theirs. This is disabled by default. When set to "Any", pods can request any prefix. 
| +| Schema | One of: `Any`, `Disabled` (case insensitive) | +| Default | `Disabled` | -To configure resource specification for the [PacketCaptureAPI](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#packetcaptureapi), patch the PacketCapture CR using the below command: +**Tab: Environment variable** -```bash -kubectl patch packetcaptureapis tigera-secure --type=merge --patch='{"spec": {"packetCaptureAPIDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-packetcapture-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_WORKLOADSOURCESPOOFING` | +| Description | Controls whether pods can use the allowedSourcePrefixes annotation to send traffic with a source IP address that is not theirs. This is disabled by default. When set to "Any", pods can request any prefix. | +| Schema | One of: `Any`, `Disabled` (case insensitive) | +| Default | `Disabled` | -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). + -#### Verification[​](#verification-24) +### Data plane: iptables[​](#data-plane-iptables) -You can verify the configured resources using the following command: +#### `IpsetsRefreshInterval` -```bash -kubectl get deployment.apps/tigera-packetcapture -n tigera-packetcapture -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` + -This command will output the configured resource requests and limits for the PacketCaptureDeployment in JSON format. 
+**Tab: Configuration file**

-```bash
-{

+| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------- | +| Key | `IpsetsRefreshInterval` | +| Description | Controls the period at which Felix re-checks all IP sets to look for discrepancies. Set to 0 to disable the periodic refresh. | +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) |

- "name": "tigera-packetcapture-server", +**Tab: Environment variable**

- "resources": { +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPSETSREFRESHINTERVAL` | +| Description | Controls the period at which Felix re-checks all IP sets to look for discrepancies. Set to 0 to disable the periodic refresh. | +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) |

- "limits": { + 

- "cpu": "1", +#### `IptablesBackend`

- "memory": "1000Mi" + 

- }, +**Tab: Configuration file**

- "requests": { +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `IptablesBackend` | +| Description | Controls which backend of iptables will be used. The default is `Auto`. Warning: changing this on a running system can leave "orphaned" rules in the "other" backend. These should be cleaned up to avoid confusing interactions. 
| +| Schema | One of: `auto`, `legacy`, `nft` (case insensitive) | +| Default | `auto` |

- "cpu": "100m", +**Tab: Environment variable**

- "memory": "100Mi" +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPTABLESBACKEND` | +| Description | Controls which backend of iptables will be used. The default is `Auto`. Warning: changing this on a running system can leave "orphaned" rules in the "other" backend. These should be cleaned up to avoid confusing interactions. | +| Schema | One of: `auto`, `legacy`, `nft` (case insensitive) | +| Default | `auto` |

- } + 

- } +#### `IptablesFilterAllowAction`

-} -```

+ 

-## PolicyRecommendation custom resource[​](#policyrecommendation-custom-resource) +**Tab: Configuration file**

-The [PolicyRecommendation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#policyrecommendation) CR provides a way to configure resources for PolicyRecommendation. The following sections provide example configurations for this CR. +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `IptablesFilterAllowAction` | +| Description | Controls what happens to traffic that is accepted by a Felix policy chain in the iptables filter table (which is used for "normal" policy). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. 
| +| Schema | One of: `ACCEPT`, `RETURN` (case insensitive) | +| Default | `ACCEPT` | -### PolicyRecommendationDeployment[​](#policyrecommendationdeployment) +**Tab: Environment variable** -To configure resource specification for the [PolicyRecommendationDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#policyrecommendationdeployment), patch the PolicyRecommendation CR using the below command: +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_IPTABLESFILTERALLOWACTION` | +| Description | Controls what happens to traffic that is accepted by a Felix policy chain in the iptables filter table (which is used for "normal" policy). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. | +| Schema | One of: `ACCEPT`, `RETURN` (case insensitive) | +| Default | `ACCEPT` | -```bash -kubectl patch policyrecommendation tigera-secure --type=merge --patch='{"spec": {"policyRecommendationDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"policy-recommendation-controller","resources":{"requests":{"cpu":"100m", "memory":"100Mi"},"limits":{"cpu":"1", "memory":"512Mi"}}}]}}}}}}' -``` + -This command sets the CPU request to 100 milliCPU (mCPU) and the memory request is set to 100 Mebibytes (MiB) while the CPU limit is set to 1 CPU and the memory limit is set to 1000 Mebibytes (MiB). 
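In practice, settings such as `IptablesFilterAllowAction` above are usually applied cluster-wide through the `FelixConfiguration` resource rather than via per-node environment variables. A minimal sketch, assuming the conventional `default` FelixConfiguration exists in your cluster (field and resource names follow the projectcalico v3 API; adjust to your environment):

```bash
# Sketch: switch the filter-table allow action from the default Accept to
# Return, so accepted traffic is handed back to the system chains for
# further processing by any rules that follow Calico's.
kubectl patch felixconfiguration default --type=merge \
  --patch '{"spec":{"iptablesFilterAllowAction":"Return"}}'

# Inspect the resulting value to confirm the change was applied.
kubectl get felixconfiguration default \
  -o jsonpath='{.spec.iptablesFilterAllowAction}'
```

Felix watches its configuration resource, so no restart is needed; per-node environment variables (the `FELIX_*` forms in these tables) override the resource-level setting where both are present.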
+#### `IptablesFilterDenyAction` -#### Verification[​](#verification-25) + -You can verify the configured resources using the following command: +**Tab: Configuration file** -```bash -kubectl get deployment.apps/tigera-policy-recommendation -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `IptablesFilterDenyAction` | +| Description | Controls what happens to traffic that is denied by network policy. By default Calico blocks traffic with an iptables "DROP" action. If you want to use the "REJECT" action instead you can configure it here. | +| Schema | One of: `DROP`, `REJECT` (case insensitive) | +| Default | `DROP` | -This command will output the configured resource requests and limits for the PolicyRecommendationDeployment in JSON format. +**Tab: Environment variable** -```bash -{ +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_IPTABLESFILTERDENYACTION` | +| Description | Controls what happens to traffic that is denied by network policy. By default Calico blocks traffic with an iptables "DROP" action. If you want to use the "REJECT" action instead you can configure it here. 
| +| Schema | One of: `DROP`, `REJECT` (case insensitive) | +| Default | `DROP` | - "name": "policy-recommendation-controller", + - "resources": { +#### `IptablesLockFilePath` - "limits": { + - "cpu": "1", +**Tab: Configuration file** - "memory": "512Mi" +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `IptablesLockFilePath` | +| Description | The location of the iptables lock file. You may need to change this if the lock file is not in its standard location (for example if you have mapped it into Felix's container at a different path). | +| Schema | Path to file | +| Default | `/run/xtables.lock` | - }, +**Tab: Environment variable** - "requests": { +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPTABLESLOCKFILEPATH` | +| Description | The location of the iptables lock file. You may need to change this if the lock file is not in its standard location (for example if you have mapped it into Felix's container at a different path). 
| +| Schema | Path to file | +| Default | `/run/xtables.lock` | - "cpu": "100m", + - "memory": "100Mi" +#### `IptablesLockProbeIntervalMillis` - } + - } +**Tab: Configuration file** -} -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `IptablesLockProbeIntervalMillis` | +| Description | When IptablesLockTimeout is enabled: the time that Felix will wait between attempts to acquire the iptables lock if it is not available. Lower values make Felix more responsive when the lock is contended, but use more CPU. | +| Schema | Milliseconds (floating point) | +| Default | `50` (50ms) | -## Update via Helm[​](#update-via-helm) +**Tab: Environment variable** -To update configurations during installation via the [Helm chart](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm#install-calico-enterprise), modify the values.yaml with the necessary resource values for the components prior to executing the Helm install. +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_IPTABLESLOCKPROBEINTERVALMILLIS` | +| Description | When IptablesLockTimeout is enabled: the time that Felix will wait between attempts to acquire the iptables lock if it is not available. Lower values make Felix more responsive when the lock is contended, but use more CPU. | +| Schema | Milliseconds (floating point) | +| Default | `50` (50ms) | -> **SECONDARY:** The provided example illustrates configuring the apiserver component. 
Follow a similar approach for other components to update resource requests and limits during installation using the Helm chart. + -### APIServer custom resource[​](#apiserver-custom-resource-1) +#### `IptablesLockTimeoutSecs` -The [APIServer](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserver) CR provides a way to configure APIServerDeployment. The following sections provide example values.yaml for apiserver component. + -#### APIServerDeployment[​](#apiserverdeployment-1) +**Tab: Configuration file** -To configure resource specification for the [APIServerDeployment](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api#apiserverdeployment), update values.yaml with the appropriate resource values. +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `IptablesLockTimeoutSecs` | +| Description | The time that Felix itself will wait for the iptables lock (rather than delegating the lock handling to the `iptables` command). Deprecated: `iptables-restore` v1.8+ always takes the lock, so enabling this feature results in deadlock. 
| +| Schema | Seconds (floating point) | +| Default | `0` (0s) | -```bash -apiServer: +**Tab: Environment variable** - apiServerDeployment: +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPTABLESLOCKTIMEOUTSECS` | +| Description | The time that Felix itself will wait for the iptables lock (rather than delegating the lock handling to the `iptables` command). Deprecated: `iptables-restore` v1.8+ always takes the lock, so enabling this feature results in deadlock. | +| Schema | Seconds (floating point) | +| Default | `0` (0s) | - spec: + - template: +#### `IptablesMangleAllowAction` - spec: + - containers: +**Tab: Configuration file** - - name: calico-apiserver +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `IptablesMangleAllowAction` | +| Description | Controls what happens to traffic that is accepted by a Felix policy chain in the iptables mangle table (which is used for "pre-DNAT" policy). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. 
| +| Schema | One of: `ACCEPT`, `RETURN` (case insensitive) | +| Default | `ACCEPT` | - resources: +**Tab: Environment variable** - limits: +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPTABLESMANGLEALLOWACTION` | +| Description | Controls what happens to traffic that is accepted by a Felix policy chain in the iptables mangle table (which is used for "pre-DNAT" policy). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. | +| Schema | One of: `ACCEPT`, `RETURN` (case insensitive) | +| Default | `ACCEPT` | - cpu: 1 + - memory: 1000Mi +#### `IptablesMarkMask` - requests: + - cpu: 100m +**Tab: Configuration file** - memory: 100Mi -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `IptablesMarkMask` | +| Description | The mask that Felix selects its IPTables Mark bits from. Should be a 32 bit hexadecimal number with at least 8 bits set, none of which clash with any other mark bits in use on the system. 
| +| Schema | 32-bit bitmask (hex or decimal allowed) with at least 2 bits set, example: `0xffff0000` | +| Default | `0xffff0000` | -You can verify the configured resources using the following command: +**Tab: Environment variable** -```bash -kubectl get deployment.apps/calico-apiserver -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}' -``` +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPTABLESMARKMASK` | +| Description | The mask that Felix selects its IPTables Mark bits from. Should be a 32 bit hexadecimal number with at least 8 bits set, none of which clash with any other mark bits in use on the system. | +| Schema | 32-bit bitmask (hex or decimal allowed) with at least 2 bits set, example: `0xffff0000` | +| Default | `0xffff0000` | -### kube-controllers + - +#### `IptablesNATOutgoingInterfaceFilter` -## [📄️Configuring the Calico Enterprise Kubernetes controllers](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/configuration) + -[Calico Enterprise Kubernetes controllers monitor the Kubernetes API and perform actions based on cluster state.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/configuration) +**Tab: Configuration file** -## [📄️Monitoring kube-controllers with Prometheus](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/prometheus) +| Attribute | Value | +| ----------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `IptablesNATOutgoingInterfaceFilter` | +| Description | This parameter can be used to limit the host interfaces on which Calico will apply SNAT to traffic leaving a Calico IPAM pool with "NAT outgoing" enabled. This can be useful if you have a main data interface, where traffic should be SNATted and a secondary device (such as the docker bridge) which is local to the host and doesn't require SNAT. This parameter uses the iptables interface matching syntax, which allows + as a wildcard. Most users will not need to set this. Example: if your data interfaces are eth0 and eth1 and you want to exclude the docker bridge, you could set this to eth+. 
| +| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,15}$` | +| Default | none | -[Review metrics for the kube-controllers component if you are using Prometheus.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/kube-controllers/prometheus) +**Tab: Environment variable** -### Configuring the Calico Enterprise Kubernetes controllers +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_IPTABLESNATOUTGOINGINTERFACEFILTER` | +| Description | This parameter can be used to limit the host interfaces on which Calico will apply SNAT to traffic leaving a Calico IPAM pool with "NAT outgoing" enabled. This can be useful if you have a main data interface, where traffic should be SNATted and a secondary device (such as the docker bridge) which is local to the host and doesn't require SNAT. This parameter uses the iptables interface matching syntax, which allows + as a wildcard. Most users will not need to set this. Example: if your data interfaces are eth0 and eth1 and you want to exclude the docker bridge, you could set this to eth+. | +| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,15}$` | +| Default | none | - + -The Calico Enterprise Kubernetes controllers are deployed in a Kubernetes cluster. The different controllers monitor the Kubernetes API and perform actions based on cluster state. 
+#### `IptablesPostWriteCheckIntervalSecs` -**Tab: Operator** - -If you have installed Calico using the operator, see the [KubeControllersConfiguration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/kubecontrollersconfig) resource instead. +**Tab: Configuration file** -**Tab: Manifest** +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `IptablesPostWriteCheckIntervalSecs` | +| Description | The period after Felix has done a write to the dataplane that it schedules an extra read back in order to check the write was not clobbered by another process. This should only occur if another application on the system doesn't respect the iptables lock. | +| Schema | Seconds (floating point) | +| Default | `5` (5s) | -The controllers are primarily configured through environment variables. When running the controllers as a Kubernetes pod, this is accomplished through the pod manifest `env` section. +**Tab: Environment variable** -## The tigera/kube-controllers container[​](#the-tigerakube-controllers-container) +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPTABLESPOSTWRITECHECKINTERVALSECS` | +| Description | The period after Felix has done a write to the dataplane that it schedules an extra read back in order to check the write was not clobbered by another process. This should only occur if another application on the system doesn't respect the iptables lock. 
| +| Schema | Seconds (floating point) | +| Default | `5` (5s) | - +#### `IptablesRefreshInterval` - +**Tab: Configuration file** +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `IptablesRefreshInterval` | +| Description | The period at which Felix re-checks the IP sets in the dataplane to ensure that no other process has accidentally broken Calico's rules. Set to 0 to disable IP sets refresh. Note: the default for this value is lower than the other refresh intervals as a workaround for a Linux kernel bug that was fixed in kernel version 4.11. If you are using v4.11 or greater you may want to set this to a higher value to reduce Felix CPU usage. 
| +| Schema | Seconds (floating point) | +| Default | `180` (3m0s) | -When running the controllers as a Kubernetes pod, Kubernetes API access is [configured automatically](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod) and no additional configuration is required. However, the controllers can also be configured to use an explicit [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file override to configure API access if needed. +**Tab: Environment variable** -| Environment | Description | Schema | -| ------------ | ------------------------------------------------------------------ | ------ | -| `KUBECONFIG` | Path to a Kubernetes kubeconfig file mounted within the container. | path | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_IPTABLESREFRESHINTERVAL` | +| Description | The period at which Felix re-checks the IP sets in the dataplane to ensure that no other process has accidentally broken Calico's rules. Set to 0 to disable IP sets refresh. Note: the default for this value is lower than the other refresh intervals as a workaround for a Linux kernel bug that was fixed in kernel version 4.11. If you are using v4.11 or greater you may want to set this to a higher value to reduce Felix CPU usage. 
| +| Schema | Seconds (floating point) | +| Default | `180` (3m0s) | -### Other configuration[​](#other-configuration) + -> **SECONDARY:** Whenever possible, prefer configuring the kube-controllers component using the [KubeControllersConfiguration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/kubecontrollersconfig) API resource, Some configuration options may not be available through environment variables. +#### `KubeNodePortRanges` -The following environment variables can be used to configure the Calico Enterprise Kubernetes controllers. + -| Environment | Description | Schema | Default | -| --------------------- | --------------------------------------------------------------------------- | --------------------------------------------------------- | ----------------------------------------------------- | -| `DATASTORE_TYPE` | Which datastore type to use | etcdv3, kubernetes | kubernetes | -| `ENABLED_CONTROLLERS` | Which controllers to run | namespace, node, policy, serviceaccount, workloadendpoint | policy,namespace,serviceaccount,workloadendpoint,node | -| `LOG_LEVEL` | Minimum log level to be displayed. | debug, info, warning, error | info | -| `KUBECONFIG` | Path to a kubeconfig file for Kubernetes API access | path | | -| `SYNC_NODE_LABELS` | When enabled, Kubernetes node labels will be copied to Calico node objects. | boolean | true | -| `AUTO_HOST_ENDPOINTS` | When set to enabled, automatically create a host endpoint for each node. | enabled, disabled | disabled | +**Tab: Configuration file** -## About each controller[​](#about-each-controller) +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `KubeNodePortRanges` | +| Description | Holds list of port ranges used for service node ports. Only used if felix detects kube-proxy running in ipvs mode. 
Felix uses these ranges to separate host and workload traffic. | +| Schema | List of port ranges: comma-delimited list of either single numbers in range \[0,65535] or ranges of numbers `n:m` | +| Default | `30000:32767` | -### Node controller[​](#node-controller) +**Tab: Environment variable** -The node controller has several functions. +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_KUBENODEPORTRANGES` | +| Description | Holds list of port ranges used for service node ports. Only used if felix detects kube-proxy running in ipvs mode. Felix uses these ranges to separate host and workload traffic. | +| Schema | List of port ranges: comma-delimited list of either single numbers in range \[0,65535] or ranges of numbers `n:m` | +| Default | `30000:32767` | -- Garbage collects IP addresses. -- Automatically provisions host endpoints for Kubernetes nodes. + -### Federation controller[​](#federation-controller) +#### `MaxIpsetSize` -The federation controller syncs Kubernetes federated endpoint changes to the Calico Enterprise datastore. The controller must have read access to the Kubernetes API to monitor `Service` and `Endpoints` events, and must also have write access to update `Endpoints`. + -The federation controller is disabled by default if `ENABLED_CONTROLLERS` is not explicitly specified. +**Tab: Configuration file** -This controller is valid for all Calico Enterprise datastore types. For more details refer to the [Configuring federated services](https://docs.tigera.io/calico-enterprise/latest/multicluster/federation/services-controller) usage guide. 
+| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------- | +| Key | `MaxIpsetSize` | +| Description | The maximum number of IP addresses that can be stored in an IP set. Not applicable if using the nftables backend. | +| Schema | Integer | +| Default | `1048576` | - +**Tab: Environment variable** -### Monitoring kube-controllers with Prometheus +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_MAXIPSETSIZE` | +| Description | The maximum number of IP addresses that can be stored in an IP set. Not applicable if using the nftables backend. | +| Schema | Integer | +| Default | `1048576` | -kube-controllers can be configured to report a number of metrics through Prometheus. This reporting is enabled by default on port 9094. See the [configuration reference](https://docs.tigera.io/calico-enterprise/latest/reference/resources/kubecontrollersconfig) for how to change metrics reporting configuration (or disable it completely). + -## Metric reference[​](#metric-reference) +### Data plane: nftables[​](#data-plane-nftables) -#### kube-controllers specific[​](#kube-controllers-specific) +#### `NftablesFilterAllowAction` -kube-controllers exports a number of Prometheus metrics. The current set is as follows. Since some metrics may be tied to particular implementation choices inside kube-controllers we can't make any hard guarantees that metrics will persist across releases. However, we aim not to make any spurious changes to existing metrics. 
+ -| Metric Name | Labels | Description | -| ------------------------------------ | --------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `ipam_allocations_in_use` | ippool, node | Number of Calico IP allocations currently in use by a workload or interface. | -| `ipam_allocations_borrowed` | ippool, node | Number of Calico IP allocations currently in use where the allocation was borrowed from a block affine to another node. | -| `ipam_allocations_gc_candidates` | ippool, node | Number of Calico IP allocations currently marked by the GC as potential leaks. This metric returns to zero under normal GC operation. | -| `ipam_allocations_gc_reclamations` | ippool, node | Count of Calico IP allocations that have been reclaimed by the GC. Increase of this counter corresponds with a decrease of the candidates gauge under normal operation. | -| `ipam_blocks` | ippool, node | Number of IPAM blocks. | -| `ipam_ippool_size` | ippool | Number of IP addresses in the IP Pool CIDR. | -| `ipam_blocks_per_node` | node | Number of IPAM blocks, indexed by the node to which they have affinity. Prefer `ipam_blocks` for new integrations. | -| `ipam_allocations_per_node` | node | Number of Calico IP allocations, indexed by node on which the allocation was made. Prefer `ipam_allocations_in_use` for new integrations. | -| `ipam_allocations_borrowed_per_node` | node | Number of Calico IP allocations borrowed from a non-affine block, indexed by node on which the allocation was made. Prefer `ipam_allocations_borrowed` for new integrations. | -| `remote_cluster_connection_status` | remote\_cluster\_name | Status of the remote cluster connection in federation. Represented as numeric values 0 (NotConnecting) ,1 (Connecting), 2 (InSync), 3 (ReSyncInProgress), 4 (ConfigChangeRestartRequired), 5 (ConfigInComplete). 
| +**Tab: Configuration file** -Labels can be interpreted as follows: +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NftablesFilterAllowAction` | +| Description | Controls the nftables action that Felix uses to represent the "allow" policy verdict in the filter table. The default is to `ACCEPT` the traffic, which is a terminal action. Alternatively, `RETURN` can be used to return the traffic back to the top-level chain for further processing by your rules. | +| Schema | One of: `ACCEPT`, `RETURN` (case insensitive) | +| Default | `ACCEPT` | -| Label Name | Description | -| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `node` | For allocation metrics, the node on which the allocation was made. For block metrics, the node for which the block has affinity. If the block has no affinity, value will be `no_affinity`. | -| `ippool` | The IP Pool that the IPAM block occupies. If there is no IP Pool which matches the block, value will be `no_ippool`. | -| `remote_cluster_name` | Name of the remote cluster in federation. | +**Tab: Environment variable** -Prometheus metrics are self-documenting, with metrics turned on, `curl` can be used to list the metrics along with their help text and type information. 
+| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_NFTABLESFILTERALLOWACTION` | +| Description | Controls the nftables action that Felix uses to represent the "allow" policy verdict in the filter table. The default is to `ACCEPT` the traffic, which is a terminal action. Alternatively, `RETURN` can be used to return the traffic back to the top-level chain for further processing by your rules. | +| Schema | One of: `ACCEPT`, `RETURN` (case insensitive) | +| Default | `ACCEPT` | -```bash -curl -s http://localhost:9094/metrics | head -``` + -#### CPU / memory metrics[​](#cpu--memory-metrics) +#### `NftablesFilterDenyAction` -kube-controllers also exports the default set of metrics that Prometheus makes available. Currently, those include: + -| Name | Description | -| -------------------------------------------- | ------------------------------------------------------------------ | -| `go_gc_duration_seconds` | A summary of the GC invocation durations. | -| `go_goroutines` | Number of goroutines that currently exist. | -| `go_memstats_alloc_bytes` | Number of bytes allocated and still in use. | -| `go_memstats_alloc_bytes_total` | Total number of bytes allocated, even if freed. | -| `go_memstats_buck_hash_sys_bytes` | Number of bytes used by the profiling bucket hash table. | -| `go_memstats_frees_total` | Total number of frees. | -| `go_memstats_gc_sys_bytes` | Number of bytes used for garbage collection system metadata. | -| `go_memstats_heap_alloc_bytes` | Number of heap bytes allocated and still in use. | -| `go_memstats_heap_idle_bytes` | Number of heap bytes waiting to be used. | -| `go_memstats_heap_inuse_bytes` | Number of heap bytes that are in use. 
| -| `go_memstats_heap_objects` | Number of allocated objects. | -| `go_memstats_heap_released_bytes_total` | Total number of heap bytes released to OS. | -| `go_memstats_heap_sys_bytes` | Number of heap bytes obtained from system. | -| `go_memstats_last_gc_time_seconds` | Number of seconds since 1970 of last garbage collection. | -| `go_memstats_lookups_total` | Total number of pointer lookups. | -| `go_memstats_mallocs_total` | Total number of mallocs. | -| `go_memstats_mcache_inuse_bytes` | Number of bytes in use by mcache structures. | -| `go_memstats_mcache_sys_bytes` | Number of bytes used for mcache structures obtained from system. | -| `go_memstats_mspan_inuse_bytes` | Number of bytes in use by mspan structures. | -| `go_memstats_mspan_sys_bytes` | Number of bytes used for mspan structures obtained from system. | -| `go_memstats_next_gc_bytes` | Number of heap bytes when next garbage collection will take place. | -| `go_memstats_other_sys_bytes` | Number of bytes used for other system allocations. | -| `go_memstats_stack_inuse_bytes` | Number of bytes in use by the stack allocator. | -| `go_memstats_stack_sys_bytes` | Number of bytes obtained from system for stack allocator. | -| `go_memstats_sys_bytes` | Number of bytes obtained by system. Sum of all system allocations. | -| `process_cpu_seconds_total` | Total user and system CPU time spent in seconds. | -| `process_max_fds` | Maximum number of open file descriptors. | -| `process_open_fds` | Number of open file descriptors. | -| `process_resident_memory_bytes` | Resident memory size in bytes. | -| `process_start_time_seconds` | Start time of the process since Unix epoch in seconds. | -| `process_virtual_memory_bytes` | Virtual memory size in bytes. | -| `promhttp_metric_handler_requests_in_flight` | Current number of scrapes being served. | -| `promhttp_metric_handler_requests_total` | Total number of scrapes by HTTP status code. 
| +**Tab: Configuration file** -### Calico Enterprise node (node) +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NftablesFilterDenyAction` | +| Description | Controls what happens to traffic that is denied by network policy. By default, Calico blocks traffic with a "drop" action. If you want to use a "reject" action instead you can configure it here. | +| Schema | One of: `DROP`, `REJECT` (case insensitive) | +| Default | `DROP` | - +**Tab: Environment variable** -## [📄️Configuring node](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration) +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_NFTABLESFILTERDENYACTION` | +| Description | Controls what happens to traffic that is denied by network policy. By default, Calico blocks traffic with a "drop" action. If you want to use a "reject" action instead you can configure it here. 
| +| Schema | One of: `DROP`, `REJECT` (case insensitive) | +| Default | `DROP` | -[Customize node using environment variables.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/configuration) + -## [🗃Felix](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/) +#### `NftablesMangleAllowAction` -[2 items](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/) + -### Configuring node +**Tab: Configuration file** - +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NftablesMangleAllowAction` | +| Description | Controls the nftables action that Felix uses to represent the "allow" policy verdict in the mangle table. The default is to `ACCEPT` the traffic, which is a terminal action. Alternatively, `RETURN` can be used to return the traffic back to the top-level chain for further processing by your rules. | +| Schema | One of: `ACCEPT`, `RETURN` (case insensitive) | +| Default | `ACCEPT` | -The `node` container is deployed to every node (on Kubernetes, by a DaemonSet), and runs three internal daemons: +**Tab: Environment variable** -- Felix, the Calico daemon that runs on every node and provides endpoints. -- BIRD, the BGP daemon that distributes routing information to other nodes. -- confd, a daemon that watches the Calico datastore for config changes and updates BIRD’s config files. 
+| Attribute | Value | +| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_NFTABLESMANGLEALLOWACTION` | +| Description | Controls the nftables action that Felix uses to represent the "allow" policy verdict in the mangle table. The default is to `ACCEPT` the traffic, which is a terminal action. Alternatively, `RETURN` can be used to return the traffic back to the top-level chain for further processing by your rules. | +| Schema | One of: `ACCEPT`, `RETURN` (case insensitive) | +| Default | `ACCEPT` | -For manifest-based installations, `node` is primarily configured through environment variables, typically set in the deployment manifest. Individual nodes may also be updated through the Node custom resource. `node` can also be configured through the Calico Operator. + -The rest of this page lists the available configuration options, and is followed by specific considerations for various settings. +#### `NftablesMarkMask` -**Tab: Operator** +**Tab: Configuration file** -`node` does not need to be configured directly when installed by the operator. For a complete operator configuration reference, see [the installation API reference documentation](https://docs.tigera.io/calico-enterprise/latest/reference/installation/api). +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `NftablesMarkMask` | +| Description | The mask that Felix selects its nftables Mark bits from. Should be a 32 bit hexadecimal number with at least 8 bits set, none of which clash with any other mark bits in use on the system. 
| +| Schema | 32-bit bitmask (hex or decimal allowed) with at least 2 bits set, example: `0xffff0000` |
+| Default | `0xffff0000` |

-**Tab: Manifest**

+**Tab: Environment variable**

-## Environment variables[​](#environment-variables)

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `FELIX_NFTABLESMARKMASK` |
+| Description | The mask that Felix selects its nftables Mark bits from. Should be a 32 bit hexadecimal number with at least 8 bits set, none of which clash with any other mark bits in use on the system. |
+| Schema | 32-bit bitmask (hex or decimal allowed) with at least 2 bits set, example: `0xffff0000` |
+| Default | `0xffff0000` |

-### Configuring the default IP pool(s)[​](#configuring-the-default-ip-pools)

+ 

-Calico uses IP pools to configure how addresses are allocated to pods, and how networking works for certain sets of addresses. You can see the full schema for IP pools here.

+#### `NftablesRefreshInterval`

-`node` can be configured to create a default IP pool for you, but only if none already exist in the cluster. The following options control the parameters on the created pool.

+ 

-| Environment | Description | Schema |
-| ----------- | ----------- | ----------- |
-| CALICO\_IPV4POOL\_CIDR | The IPv4 Pool to create if none exists at start up. It is invalid to define this variable and NO\_DEFAULT\_POOLS. 
\[Default: First of (192.168.0.0/16, 172.16.0.0/16, .., 172.31.0.0/16) that is not already in use locally ] | IPv4 CIDR |
-| CALICO\_IPV4POOL\_BLOCK\_SIZE | Block size to use for the IPv4 Pool created at startup. Block size for IPv4 should be in the range 20-32 (inclusive) \[Default: `26`] | int |
-| CALICO\_IPV4POOL\_IPIP | IPIP Mode to use for the IPv4 Pool created at start up. If set to a value other than `Never`, `CALICO_IPV4POOL_VXLAN` should not be set. \[Default: `Always`] | Always, CrossSubnet, Never ("Off" is also accepted as a synonym for "Never") |
-| CALICO\_IPV4POOL\_VXLAN | VXLAN Mode to use for the IPv4 Pool created at start up. If set to a value other than `Never`, `CALICO_IPV4POOL_IPIP` should not be set. \[Default: `Never`] | Always, CrossSubnet, Never |
-| CALICO\_IPV4POOL\_NAT\_OUTGOING | Controls NAT Outgoing for the IPv4 Pool created at start up. \[Default: `true`] | boolean |
-| CALICO\_IPV4POOL\_NODE\_SELECTOR | Controls the NodeSelector for the IPv4 Pool created at start up. \[Default: `all()`] | [selector](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool#node-selector) |
-| CALICO\_IPV6POOL\_CIDR | The IPv6 Pool to create if none exists at start up. It is invalid to define this variable and NO\_DEFAULT\_POOLS. \[Default: ``] | IPv6 CIDR |
-| CALICO\_IPV6POOL\_BLOCK\_SIZE | Block size to use for the IPv6 Pool created at startup. Block size for IPv6 should be in the range 116-128 (inclusive) \[Default: `122`] | int |
-| CALICO\_IPV6POOL\_VXLAN | VXLAN Mode to use for the IPv6 Pool created at start up. \[Default: `Never`] | Always, CrossSubnet, Never |
-| CALICO\_IPV6POOL\_NAT\_OUTGOING | Controls NAT Outgoing for the IPv6 Pool created at start up. \[Default: `false`] | boolean |
-| CALICO\_IPV6POOL\_NODE\_SELECTOR | Controls the NodeSelector for the IPv6 Pool created at start up. 
\[Default: `all()`] | [selector](https://docs.tigera.io/calico-enterprise/latest/reference/resources/ippool#node-selector) | -| CALICO\_IPV4POOL\_DISABLE\_BGP\_EXPORT | Disable exporting routes over BGP for the IPv4 Pool created at start up. \[Default: `false`] | boolean | -| CALICO\_IPV6POOL\_DISABLE\_BGP\_EXPORT | Disable exporting routes over BGP for the IPv6 Pool created at start up. \[Default: `false`] | boolean | -| NO\_DEFAULT\_POOLS | Prevents Calico Enterprise from creating a default pool if one does not exist. \[Default: `false`] | boolean | +**Tab: Configuration file** -### Configuring BGP Networking[​](#configuring-bgp-networking) +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------- | +| Key | `NftablesRefreshInterval` | +| Description | Controls the interval at which Felix periodically refreshes the nftables rules. | +| Schema | Seconds (floating point) | +| Default | `180` (3m0s) | -BGP configuration for Calico nodes is normally configured through the [Node](https://docs.tigera.io/calico-enterprise/latest/reference/resources/node), [BGPConfiguration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/bgpconfig), and [BGPPeer](https://docs.tigera.io/calico-enterprise/latest/reference/resources/bgppeer) resources. `node` also exposes some options to allow setting certain fields on these objects, as described below. 
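The Node, BGPConfiguration, and BGPPeer resources mentioned above are configured declaratively. As a point of reference, a minimal global BGPPeer manifest might look like the following sketch; the peer address and AS number are illustrative placeholders, not values taken from this page:

```yaml
# Hypothetical example: peer every node with one external router.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: example-global-peer   # illustrative name
spec:
  peerIP: 192.0.2.1           # example address (TEST-NET-1), not a real peer
  asNumber: 64512             # example private AS number
```

Applied cluster-wide (no `node` field in `spec`), a peer like this is used by every node; the environment variables below only override per-node fields such as the node's own IP and AS number.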
+**Tab: Environment variable** -| Environment | Description | Schema | -| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- | -| NODENAME | A unique identifier for this host. See [node name determination](#node-name-determination) for more details. | lowercase string | -| IP | The IPv4 address to assign this host or detection behavior at startup. Refer to [IP setting](#ip-setting) for the details of the behavior possible with this field. | IPv4 | -| IP6 | The IPv6 address to assign this host or detection behavior at startup. Refer to [IP setting](#ip-setting) for the details of the behavior possible with this field. | IPv6 | -| IP\_AUTODETECTION\_METHOD | The method to use to autodetect the IPv4 address for this host. This is only used when the IPv4 address is being autodetected. See [IP Autodetection methods](#ip-autodetection-methods) for details of the valid methods. \[Default: `first-found`] | string | -| IP6\_AUTODETECTION\_METHOD | The method to use to autodetect the IPv6 address for this host. This is only used when the IPv6 address is being autodetected. See [IP Autodetection methods](#ip-autodetection-methods) for details of the valid methods. \[Default: `first-found`] | string | -| AS | The AS number for this node. When specified, the value is saved in the node resource configuration for this host, overriding any previously configured value. 
When omitted, if an AS number has been previously configured in the node resource, that AS number is used for the peering. When omitted, if an AS number has not yet been configured in the node resource, the node will use the global value (see [example modifying Global BGP settings](https://docs.tigera.io/calico-enterprise/latest/networking/configuring/bgp) for details.) | int | -| CALICO\_ROUTER\_ID | Sets the `router id` to use for BGP if no IPv4 address is set on the node. For an IPv6-only system, this may be set to `hash`. It then uses the hash of the nodename to create a 4 byte router id. See note below. \[Default: \`\`] | string | -| CALICO\_K8S\_NODE\_REF | The name of the corresponding node object in the Kubernetes API. When set, used for correlating this node with events from the Kubernetes API. | string | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------- | +| Key | `FELIX_NFTABLESREFRESHINTERVAL` | +| Description | Controls the interval at which Felix periodically refreshes the nftables rules. | +| Schema | Seconds (floating point) | +| Default | `180` (3m0s) | -### Configuring Datastore Access[​](#configuring-datastore-access) + -| Environment | Description | Schema | -| --------------- | ------------------------------------------- | ------------------ | -| DATASTORE\_TYPE | Type of datastore. \[Default: `kubernetes`] | kubernetes, etcdv3 | +### Data plane: eBPF[​](#data-plane-ebpf) -#### Configuring Kubernetes Datastore Access[​](#configuring-kubernetes-datastore-access) +#### `BPFAttachType` -| Environment | Description | Schema | -| ------------------ | ------------------------------------------------------------------------------ | ------ | -| KUBECONFIG | When using the Kubernetes datastore, the location of a kubeconfig file to use. | string | -| K8S\_API\_ENDPOINT | Location of the Kubernetes API. Not required if using kubeconfig. 
| string |
-| K8S\_CERT\_FILE | Location of a client certificate for accessing the Kubernetes API. | string |
-| K8S\_KEY\_FILE | Location of a client key for accessing the Kubernetes API. | string |
-| K8S\_CA\_FILE | Location of a CA for accessing the Kubernetes API. | string |

+ 

-> **NOTE:** When Calico Enterprise is configured to use the Kubernetes API as the datastore, the environment variables used for BGP configuration are ignored—this includes selection of the node AS number (AS) and all of the IP selection options (IP, IP6, IP\_AUTODETECTION\_METHOD, IP6\_AUTODETECTION\_METHOD).

+**Tab: Configuration file**

-### Configuring Logging[​](#configuring-logging)

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `BPFAttachType` |
+| Description | Controls how the BPF programs are attached at the network interfaces. By default `TCX` is used where available to enable easier coexistence with 3rd party programs. `TC` can force the legacy method of attaching via a qdisc. `TCX` falls back to `TC` if `TCX` is not available. |
+| Schema | One of: `TCX`, `TC` (case insensitive) |
+| Default | `TCX` |

-| Environment | Description | Schema |
-| ----------- | ----------- | ----------- |
-| CALICO\_DISABLE\_FILE\_LOGGING | Disables logging to file. \[Default: "false"] | string |
-| CALICO\_STARTUP\_LOGLEVEL | The log severity above which startup `node` logs are sent to stdout. 
\[Default: `ERROR`] | DEBUG, INFO, WARNING, ERROR, CRITICAL, or NONE (case-insensitive) |

+**Tab: Environment variable**

-### Configuring CNI Plugin[​](#configuring-cni-plugin)

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `FELIX_BPFATTACHTYPE` |
+| Description | Controls how the BPF programs are attached at the network interfaces. By default `TCX` is used where available to enable easier coexistence with 3rd party programs. `TC` can force the legacy method of attaching via a qdisc. `TCX` falls back to `TC` if `TCX` is not available. |
+| Schema | One of: `TCX`, `TC` (case insensitive) |
+| Default | `TCX` |

-`node` has a few options that are configurable based on the CNI plugin and CNI plugin configuration used on the cluster.

+ 

-| Environment | Description | Schema |
-| ----------- | ----------- | ----------- |
-| USE\_POD\_CIDR | Use the Kubernetes `Node.Spec.PodCIDR` field when using host-local IPAM. Requires Kubernetes API datastore. This field is required when using the Kubernetes API datastore with host-local IPAM. \[Default: false] | boolean |
-| CALICO\_MANAGE\_CNI | Tells Calico to update the kubeconfig file at /host/etc/cni/net.d/calico-kubeconfig on credentials change. 
\[Default: true] | boolean |

+#### `BPFCTLBLogFilter`

-### Other Environment Variables[​](#other-environment-variables)

+ 

-| Environment | Description | Schema |
-| ----------- | ----------- | ----------- |
-| DISABLE\_NODE\_IP\_CHECK | Skips checks for duplicate Node IPs. This can reduce the load on the cluster when a large number of Nodes are restarting. \[Default: `false`] | boolean |
-| WAIT\_FOR\_DATASTORE | Wait for connection to datastore before starting. If a successful connection is not made, node will shutdown. \[Default: `false`] | boolean |
-| CALICO\_NETWORKING\_BACKEND | The networking backend to use. In `bird` mode, Calico will provide BGP networking using the BIRD BGP daemon; VXLAN networking can also be used. In `vxlan` mode, only VXLAN networking is provided; BIRD and BGP are disabled. If set to `none` (also known as policy-only mode), both BIRD and VXLAN are disabled. \[Default: `bird`] | bird, vxlan, none |
-| CLUSTER\_TYPE | Contains comma delimited list of indicators about this cluster. e.g. k8s, mesos, kubeadm, canal, bgp | string |

+**Tab: Configuration file**

-## Appendix[​](#appendix)

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `BPFCTLBLogFilter` |
+| Description | Specifies what is logged by the connect-time load balancer when BPFLogLevel is debug. Currently has to be specified as 'all' when BPFLogFilters is set to see CTLB logs. 
| +| Schema | One of: `all` (case insensitive) |
+| Default | none |

-### Node name determination[​](#node-name-determination)

+**Tab: Environment variable**

-The `node` must know the name of the node on which it is running. The node name is used to retrieve the [Node resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/node) configured for this node if it exists, or to create a new node resource representing the node if it does not. It is also used to associate the node with per-node [BGP configuration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/bgpconfig), [felix configuration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/felixconfig), and endpoints.

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `FELIX_BPFCTLBLOGFILTER` |
+| Description | Specifies what is logged by the connect-time load balancer when BPFLogLevel is debug. Currently has to be specified as 'all' when BPFLogFilters is set to see CTLB logs. |
+| Schema | One of: `all` (case insensitive) |
+| Default | none |

-When launched, the `node` container sets the node name according to the following order of precedence:

+ 

-1. The value specified in the `NODENAME` environment variable, if set.
-2. The value specified in `/var/lib/calico/nodename`, if it exists.
-3. The value specified in the `HOSTNAME` environment variable, if set.
-4. The hostname as returned by the operating system, converted to lowercase.

+#### `BPFConnectTimeLoadBalancing`

-Once the node has determined its name, the value will be cached in `/var/lib/calico/nodename` for future use. 
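The name-selection precedence above can be modeled in a few lines of Python. This is an illustrative sketch, not calico/node's actual implementation; note that it lowercases the `HOSTNAME` value, which matches the worked example on this page:

```python
import os
import socket

CACHE_FILE = "/var/lib/calico/nodename"  # where calico/node caches the chosen name

def determine_node_name(cache_file: str = CACHE_FILE) -> str:
    """Illustrative model of calico/node's node-name precedence."""
    # 1. NODENAME environment variable, if set to a non-empty value.
    if os.environ.get("NODENAME"):
        return os.environ["NODENAME"]
    # 2. A previously cached name, if the cache file exists.
    if os.path.isfile(cache_file):
        with open(cache_file) as f:
            return f.read().strip()
    # 3. HOSTNAME environment variable (lowercased, matching the worked example).
    if os.environ.get("HOSTNAME"):
        return os.environ["HOSTNAME"].lower()
    # 4. The operating system's hostname, converted to lowercase.
    return socket.gethostname().lower()
```

With `NODENAME=""`, no cache file, and `HOSTNAME="host-A"`, this sketch selects "host-a", as in the example that follows.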
+ 

-For example, if given the following conditions:

+**Tab: Configuration file**

-- `NODENAME=""`
-- `/var/lib/calico/nodename` does not exist
-- `HOSTNAME="host-A"`
-- The operating system returns "host-A.internal.myorg.com" for the hostname

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `BPFConnectTimeLoadBalancing` |
+| Description | When in BPF mode, controls whether Felix installs the connect-time load balancer. The connect-time load balancer is required for the host to be able to reach Kubernetes services and it improves the performance of pod-to-service connections. When set to TCP, connect time load balancing is available only for services with TCP ports. |
+| Schema | One of: `Disabled`, `Enabled`, `TCP` (case insensitive) |
+| Default | `TCP` |

-node will use "host-a" for its name and will write the value in `/var/lib/calico/nodename`. If node is then restarted, it will use the cached value of "host-a" read from the file on disk.

+**Tab: Environment variable**

-### IP setting[​](#ip-setting)

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `FELIX_BPFCONNECTTIMELOADBALANCING` |
+| Description | When in BPF mode, controls whether Felix installs the connect-time load balancer. 
The connect-time load balancer is required for the host to be able to reach Kubernetes services and it improves the performance of pod-to-service connections. When set to TCP, connect time load balancing is available only for services with TCP ports. |
+| Schema | One of: `Disabled`, `Enabled`, `TCP` (case insensitive) |
+| Default | `TCP` |

-The IP (for IPv4) and IP6 (for IPv6) environment variables are used to set, force autodetection, or disable auto detection of the address for the appropriate IP version for the node. When the environment variable is set, the address is saved in the [node resource configuration](https://docs.tigera.io/calico-enterprise/latest/reference/resources/node) for this host, overriding any previously configured value.

+ 

-calico/node will attempt to detect subnet information from the host, and augment the provided address if possible.

+#### `BPFConnectTimeLoadBalancingEnabled`

-#### IP setting special case values[​](#ip-setting-special-case-values)

+ 

-There are several special case values that can be set in the IP(6) environment variables; they are:

+**Tab: Configuration file**

-- Not set or empty string: Any previously set address on the node resource will be used. If no previous address is set on the node resource the two versions behave differently:

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `BPFConnectTimeLoadBalancingEnabled` |
+| Description | When in BPF mode, controls whether Felix installs the connection-time load balancer. The connect-time load balancer is required for the host to be able to reach Kubernetes services and it improves the performance of pod-to-service connections. 
The only reason to disable it is for debugging purposes. Deprecated: Use BPFConnectTimeLoadBalancing. |
+| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
+| Default | none |

- 

+**Tab: Environment variable**

 - IP will do autodetection of the IPv4 address and set it on the node resource. - IP6 will not do autodetection.

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `FELIX_BPFCONNECTTIMELOADBALANCINGENABLED` |
+| Description | When in BPF mode, controls whether Felix installs the connection-time load balancer. The connect-time load balancer is required for the host to be able to reach Kubernetes services and it improves the performance of pod-to-service connections. The only reason to disable it is for debugging purposes. Deprecated: Use BPFConnectTimeLoadBalancing. |
+| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
+| Default | none |

-- `autodetect`: Autodetection will always be performed for the IP address and the detected address will overwrite any value configured in the node resource.

+ 

-- `none`: Autodetection will not be performed (this is useful to disable IPv4).

+#### `BPFConntrackCleanupMode`

-### IP autodetection methods[​](#ip-autodetection-methods)

+ 

-When Calico Enterprise is used for routing, each node must be configured with an IPv4 address and/or an IPv6 address that will be used to route between nodes. To eliminate node specific IP address configuration, the `node` container can be configured to autodetect these IP addresses. 
In many systems, there might be multiple physical interfaces on a host, or possibly multiple IP addresses configured on a physical interface. In these cases, there are multiple addresses to choose from and so autodetection of the correct address can be tricky.

+**Tab: Configuration file**

-The IP autodetection methods are provided to improve the selection of the correct address, by limiting the selection based on suitable criteria for your deployment.

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `BPFConntrackCleanupMode` |
+| Description | Controls how BPF conntrack entries are cleaned up. `Auto` will use a BPF program if supported, falling back to userspace if not. `Userspace` will always use the userspace cleanup code. `BPFProgram` will always use the BPF program (failing if not supported). To be deprecated in future versions as the conntrack map type changed to lru\_hash and userspace cleanup is the only mode that is supported. |
+| Schema | One of: `Auto`, `BPFProgram`, `Userspace` (case insensitive) |
+| Default | `Auto` |

-The following sections describe the available IP autodetection methods. 
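For context, FelixConfiguration keys such as `BPFConntrackCleanupMode` are usually set on the cluster-wide `default` FelixConfiguration resource. A minimal sketch follows, assuming (as is the usual convention for this CRD) that the field name is the lowerCamel form of the key; verify the exact field name against the FelixConfiguration reference before using it:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  # Assumed field name: lowerCamel form of the BPFConntrackCleanupMode key.
  bpfConntrackCleanupMode: Userspace
```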
+**Tab: Environment variable**

-#### first-found[​](#first-found)

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `FELIX_BPFCONNTRACKCLEANUPMODE` |
+| Description | Controls how BPF conntrack entries are cleaned up. `Auto` will use a BPF program if supported, falling back to userspace if not. `Userspace` will always use the userspace cleanup code. `BPFProgram` will always use the BPF program (failing if not supported). To be deprecated in future versions as the conntrack map type changed to lru\_hash and userspace cleanup is the only mode that is supported. |
+| Schema | One of: `Auto`, `BPFProgram`, `Userspace` (case insensitive) |
+| Default | `Auto` |

-The `first-found` option enumerates all interface IP addresses and returns the first valid IP address (based on IP version and type of address) on the first valid interface. Certain known "local" interfaces are omitted, such as the docker bridge. The order that both the interfaces and the IP addresses are listed is system dependent.

+ 

-This is the default detection method. However, since this method only makes a very simplified guess, it is recommended to either configure the node with a specific IP address, or to use one of the other detection methods.

+#### `BPFConntrackLogLevel`

-e.g. 
```text
IP_AUTODETECTION_METHOD=first-found

+**Tab: Configuration file**

-IP6_AUTODETECTION_METHOD=first-found
-```

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `BPFConntrackLogLevel` |
+| Description | Controls the log level of the BPF conntrack cleanup program, which runs periodically to clean up expired BPF conntrack entries. |
+| Schema | One of: `debug`, `off` (case insensitive) |
+| Default | `off` |

-#### kubernetes-internal-ip[​](#kubernetes-internal-ip)

+**Tab: Environment variable**

-The `kubernetes-internal-ip` method will select the first internal IP address listed in the Kubernetes node's `Status.Addresses` field.

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `FELIX_BPFCONNTRACKLOGLEVEL` |
+| Description | Controls the log level of the BPF conntrack cleanup program, which runs periodically to clean up expired BPF conntrack entries. |
+| Schema | One of: `debug`, `off` (case insensitive) |
+| Default | `off` |

-Example:

+ 

-```text
-IP_AUTODETECTION_METHOD=kubernetes-internal-ip

+#### `BPFConntrackTimeouts`

-IP6_AUTODETECTION_METHOD=kubernetes-internal-ip
-```

+ 

-#### can-reach=DESTINATION[​](#can-reachdestination)

+**Tab: Configuration file**

-The `can-reach` method uses your local routing to determine which IP address will be used to reach the supplied destination. Both IP addresses and domain names may be used. 
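A `can-reach`-style selection can be sketched with a connected UDP socket, which asks the kernel which local source address its routing table would pick toward the destination. This is an illustrative model of the technique, not Calico's implementation:

```python
import socket

def source_ip_for(destination: str, port: int = 53) -> str:
    """Ask the kernel which local IPv4 address it would use to reach destination."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # connect() on a UDP socket only resolves the route and binds a
        # source address; no packet is sent on the wire.
        s.connect((destination, port))
        return s.getsockname()[0]
    finally:
        s.close()
```

For example, `source_ip_for("127.0.0.1")` selects the loopback address, while a routable destination such as `8.8.8.8` selects the address of the interface your default route uses; domain names also work because the socket call resolves them first.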
+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `BPFConntrackTimeouts` |
+| Description | BPFConntrackTimers overrides the default values for the specified conntrack timer if set. Each value can be either a duration or `Auto` to pick the value from a Linux conntrack timeout. Configurable timers are: CreationGracePeriod, TCPSynSent, TCPEstablished, TCPFinsSeen, TCPResetSeen, UDPTimeout, GenericTimeout, ICMPTimeout. Unset values are replaced by the default values with a warning log for incorrect values. |
+| Schema | Comma-delimited list of key=value pairs |
+| Default | `CreationGracePeriod=10s,TCPSynSent=20s,TCPEstablished=1h,TCPFinsSeen=Auto,TCPResetSeen=40s,UDPTimeout=60s,GenericTimeout=10m,ICMPTimeout=5s` |

-Example using IP addresses:

+**Tab: Environment variable**

-```text
-IP_AUTODETECTION_METHOD=can-reach=8.8.8.8

+| Attribute | Value |
+| ----------- | ----------- |
+| Key | `FELIX_BPFCONNTRACKTIMEOUTS` |
+| Description | BPFConntrackTimers overrides the default values for the specified conntrack timer if set. 
Each value can be either a duration or `Auto` to pick the value from a Linux conntrack timeout. Configurable timers are: CreationGracePeriod, TCPSynSent, TCPEstablished, TCPFinsSeen, TCPResetSeen, UDPTimeout, GenericTimeout, ICMPTimeout. Unset values are replaced by the default values with a warning log for incorrect values. | +| Schema | Comma-delimited list of key=value pairs | +| Default | `CreationGracePeriod=10s,TCPSynSent=20s,TCPEstablished=1h,TCPFinsSeen=Auto,TCPResetSeen=40s,UDPTimeout=60s,GenericTimeout=10m,ICMPTimeout=5s` | -IP6_AUTODETECTION_METHOD=can-reach=2001:4860:4860::8888 -``` + -Example using domain names: +#### `BPFDNSPolicyMode` -```text -IP_AUTODETECTION_METHOD=can-reach=www.google.com + -IP6_AUTODETECTION_METHOD=can-reach=www.google.com -``` +**Tab: Configuration file** -#### interface=INTERFACE-REGEX[​](#interfaceinterface-regex) +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFDNSPolicyMode` | +| Description | Specifies how DNS policy programming will be handled. Inline - BPF parses DNS response inline with DNS response packet processing. This guarantees the DNS rules reflect any change immediately. NoDelay - Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. 
| +| Schema | One of: `Inline`, `NoDelay` (case insensitive) | +| Default | `Inline` | -The `interface` method uses the supplied interface [regular expression](https://pkg.go.dev/regexp) to enumerate matching interfaces and to return the first IP address on the first matching interface. The order that both the interfaces and the IP addresses are listed is system dependent. +**Tab: Environment variable** -Example with valid IP address on interface eth0, eth1, eth2 etc.: +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFDNSPOLICYMODE` | +| Description | Specifies how DNS policy programming will be handled. Inline - BPF parses DNS response inline with DNS response packet processing. This guarantees the DNS rules reflect any change immediately. NoDelay - Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. 
| +| Schema | One of: `Inline`, `NoDelay` (case insensitive) | +| Default | `Inline` | -```text -IP_AUTODETECTION_METHOD=interface=eth.* + -IP6_AUTODETECTION_METHOD=interface=eth.* -``` +#### `BPFDSROptoutCIDRs` -#### skip-interface=INTERFACE-REGEX[​](#skip-interfaceinterface-regex) + -The `skip-interface` method uses the supplied interface [regular expression](https://pkg.go.dev/regexp) to exclude interfaces and to return the first IP address on the first interface that does not match. The order that both the interfaces and the IP addresses are listed is system dependent. +**Tab: Configuration file** -Example with valid IP address on interface exclude enp6s0f0, eth0, eth1, eth2 etc.: +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFDSROptoutCIDRs` | +| Description | A list of CIDRs which are excluded from DSR. That is, clients in those CIDRs will access service node ports as if BPFExternalServiceMode was set to Tunnel. | +| Schema | Comma-delimited list of CIDRs | +| Default | none | -```text -IP_AUTODETECTION_METHOD=skip-interface=enp6s0f0,eth.* +**Tab: Environment variable** -IP6_AUTODETECTION_METHOD=skip-interface=enp6s0f0,eth.* -``` +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFDSROPTOUTCIDRS` | +| Description | A list of CIDRs which are excluded from DSR. That is, clients in those CIDRs will access service node ports as if BPFExternalServiceMode was set to Tunnel. | +| Schema | Comma-delimited list of CIDRs | +| Default | none | -#### cidr=CIDR[​](#cidrcidr) + -The `cidr` method will select any IP address from the node that falls within the given CIDRs. 
For example: +#### `BPFDataIfacePattern` -Example: + -```text -IP_AUTODETECTION_METHOD=cidr=10.0.1.0/24,10.0.2.0/24 +**Tab: Configuration file** -IP6_AUTODETECTION_METHOD=cidr=2001:4860::0/64 -``` +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFDataIfacePattern` | +| Description | A regular expression that controls which interfaces Felix should attach BPF programs to in order to catch traffic to/from the network. This needs to match the interfaces that Calico workload traffic flows over as well as any interfaces that handle incoming traffic to nodeports and services from outside the cluster. It should not match the workload interfaces (usually named cali...) or any other special device managed by Calico itself (e.g., tunnels). | +| Schema | Regular expression | +| Default | `^((en\|wl\|ww\|sl\|ib)[Popsx].*\|(eth\|wlan\|wwan\|bond).*)` | -### Node readiness[​](#node-readiness) +**Tab: Environment variable** -The `calico/node` container supports an exec readiness endpoint. 
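The default `BPFDataIfacePattern` documented above can be sanity-checked against interface names before changing it. A small sketch, using the default regex from the table (with the markdown `\|` escapes removed); the helper name `is_data_iface` is illustrative only:

```python
import re

# Default BPFDataIfacePattern from the table above, unescaped.
DATA_IFACE_PATTERN = re.compile(
    r"^((en|wl|ww|sl|ib)[Popsx].*|(eth|wlan|wwan|bond).*)"
)

def is_data_iface(name):
    """True if an interface name matches the default pattern,
    i.e. Felix would attach BPF programs to it."""
    return DATA_IFACE_PATTERN.match(name) is not None

# Typical host NICs match; Calico-managed devices do not.
for iface in ["eth0", "ens5", "bond0", "wlan0"]:
    assert is_data_iface(iface)
for iface in ["cali12345", "lo", "tunl0"]:
    assert not is_data_iface(iface)
```

As the description says, the pattern must cover the NICs that carry workload and nodeport traffic while excluding workload interfaces (`cali...`) and Calico's own tunnel devices.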
+| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFDATAIFACEPATTERN` | +| Description | A regular expression that controls which interfaces Felix should attach BPF programs to in order to catch traffic to/from the network. This needs to match the interfaces that Calico workload traffic flows over as well as any interfaces that handle incoming traffic to nodeports and services from outside the cluster. It should not match the workload interfaces (usually named cali...) or any other special device managed by Calico itself (e.g., tunnels). | +| Schema | Regular expression | +| Default | `^((en\|wl\|ww\|sl\|ib)[Popsx].*\|(eth\|wlan\|wwan\|bond).*)` | -To access this endpoint, use the following command. + -```bash -docker exec calico-node /bin/calico-node [flag] -``` +#### `BPFDisableGROForIfaces` -Substitute `[flag]` with one or more of the following. + -- `-bird-ready` -- `-bird6-ready` -- `-felix-ready` +**Tab: Configuration file** -The BIRD readiness endpoint ensures that the BGP mesh is healthy by verifying that all BGP peers are established and no graceful restart is in progress. If the BIRD readiness check is failing due to unreachable peers that are no longer in the cluster, see [decommissioning a node](https://docs.tigera.io/calico-enterprise/latest/operations/decommissioning-a-node). 
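The readiness flags listed above can be combined into a single probe. A minimal sketch that builds and runs the documented `docker exec` command; it assumes the container is named `calico-node` (as in the example above) and that `docker` is available on the node, so it is only usable on a live host:

```python
import subprocess

# Flags documented for the calico-node readiness endpoint.
READINESS_FLAGS = ("-bird-ready", "-bird6-ready", "-felix-ready")

def readiness_cmd(*flags):
    """Build the `docker exec calico-node /bin/calico-node [flag]`
    command line from one or more readiness flags."""
    for f in flags:
        if f not in READINESS_FLAGS:
            raise ValueError(f"unknown readiness flag: {f}")
    return ["docker", "exec", "calico-node", "/bin/calico-node", *flags]

def node_ready(*flags):
    """Exit status 0 means the selected components report ready."""
    return subprocess.run(readiness_cmd(*flags)).returncode == 0
```

For example, `node_ready("-felix-ready", "-bird-ready")` checks Felix and the IPv4 BGP mesh in one call.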
+| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFDisableGROForIfaces` | +| Description | A regular expression that controls the interfaces for which Felix should disable the Generic Receive Offload \[GRO] option. It should not match the workload interfaces (usually named cali...). | +| Schema | Regular expression | +| Default | none | -### Setting `CALICO_ROUTER_ID` for IPv6 only system[​](#setting-calico_router_id-for-ipv6-only-system) +**Tab: Environment variable** -Setting CALICO\_ROUTER\_ID to value `hash` will use a hash of the configured nodename for the router ID. This should only be used in IPv6-only systems with no IPv4 address to use for the router ID. Since each node chooses its own router ID in isolation, it is possible for two nodes to pick the same ID resulting in a clash. The probability of such a clash grows with cluster size so this feature should not be used in a large cluster (500+ nodes). +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFDISABLEGROFORIFACES` | +| Description | A regular expression that controls the interfaces for which Felix should disable the Generic Receive Offload \[GRO] option. It should not match the workload interfaces (usually named cali...). 
| +| Schema | Regular expression | +| Default | none | -### Felix - - - -## [📄️Configuring Felix](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/configuration) +#### `BPFDisableUnprivileged` -[Configure Felix, the daemon that runs on every machine that provides endpoints.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/configuration) + -## [📄️Monitoring Felix with Prometheus](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/prometheus) +**Tab: Configuration file** -[Review metrics for the Felix component if you are using Prometheus.](https://docs.tigera.io/calico-enterprise/latest/reference/component-resources/node/felix/prometheus) +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFDisableUnprivileged` | +| Description | If enabled, Felix sets the kernel.unprivileged\_bpf\_disabled sysctl to disable unprivileged use of BPF. This ensures that unprivileged users cannot access Calico's BPF maps and cannot insert their own BPF programs to interfere with Calico's. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `true` | -### Configuring Felix +**Tab: Environment variable** - +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFDISABLEUNPRIVILEGED` | +| Description | If enabled, Felix sets the kernel.unprivileged\_bpf\_disabled sysctl to disable unprivileged use of BPF. This ensures that unprivileged users cannot access Calico's BPF maps and cannot insert their own BPF programs to interfere with Calico's. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | -> **SECONDARY:** The following tables detail the configuration file and environment variable parameters. For `FelixConfiguration` resource settings, refer to [Felix Configuration Resource](https://docs.tigera.io/calico-enterprise/latest/reference/resources/felixconfig). + -Configuration for Felix is read from one of four possible locations, in order, as follows. +#### `BPFEnabled` -1. Environment variables. -2. The Felix configuration file. -3. Host-specific `FelixConfiguration` resources (`node.`). -4. The global `FelixConfiguration` resource (`default`). + -The value of any configuration parameter is the value read from the *first* location containing a value. For example, if an environment variable contains a value, it takes top precedence. +**Tab: Configuration file** -If not set in any of these locations, most configuration parameters have defaults, and it should be rare to have to explicitly set them. 
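The four-location precedence described above (environment variables, then the configuration file, then the host-specific `FelixConfiguration`, then the global `default` resource) is a first-match lookup. A minimal sketch of that resolution order, with a hypothetical `effective_value` helper and illustrative dict inputs:

```python
def effective_value(param, env, config_file, node_resource,
                    global_resource, default=None):
    """Return the value from the first location that defines
    `param`, mirroring Felix's documented precedence order."""
    for source in (env, config_file, node_resource, global_resource):
        if param in source:
            return source[param]
    return default

# Example: an environment variable takes precedence over the
# global FelixConfiguration resource.
value = effective_value(
    "BPFEnabled",
    env={"BPFEnabled": "true"},
    config_file={},
    node_resource={},
    global_resource={"BPFEnabled": "false"},
    default="false",
)
assert value == "true"
```

If no location sets the parameter, the built-in default applies, which is why most parameters rarely need to be set explicitly.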
+| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFEnabled` | +| Description | If enabled, Felix will use the BPF dataplane. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | -The full list of parameters which can be set is as follows. +**Tab: Environment variable** -## Spec[​](#spec) +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFENABLED` | +| Description | If enabled, Felix will use the BPF dataplane. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | -### Datastore connection[​](#datastore-connection) + -#### `DatastoreType` +#### `BPFEnforceRPF` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `DatastoreType` | -| Description | Controls which datastore driver Felix will use. Typically, this is detected from the environment and it does not need to be set manually. (For example, if `KUBECONFIG` is set, the kubernetes datastore driver will be used by default). 
| -| Schema | One of: `etcdv3`, `kubernetes` (case insensitive) | -| Default | `etcdv3` | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFEnforceRPF` | +| Description | Enforce strict RPF on all host interfaces with BPF programs regardless of the per-interface or global setting. Possible values are Disabled, Strict or Loose. | +| Schema | One of: `Disabled`, `Loose`, `Strict` (case insensitive) | +| Default | `Loose` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_DATASTORETYPE` | -| Description | Controls which datastore driver Felix will use. Typically, this is detected from the environment and it does not need to be set manually. (For example, if `KUBECONFIG` is set, the kubernetes datastore driver will be used by default). | -| Schema | One of: `etcdv3`, `kubernetes` (case insensitive) | -| Default | `etcdv3` | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFENFORCERPF` | +| Description | Enforce strict RPF on all host interfaces with BPF programs regardless of the per-interface or global setting. Possible values are Disabled, Strict or Loose. 
| +| Schema | One of: `Disabled`, `Loose`, `Strict` (case insensitive) | +| Default | `Loose` | -#### `EtcdAddr` +#### `BPFExcludeCIDRsFromNAT` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `EtcdAddr` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, the etcd server and port to connect to. If EtcdEndpoints is also specified, it takes precedence. | -| Schema | String matching regex `^[^:/]+:\d+$` | -| Default | `127.0.0.1:2379` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `BPFExcludeCIDRsFromNAT` | +| Description | A list of CIDRs that are to be excluded from NAT resolution so that the host can handle them. A typical use case is a node-local DNS cache. | +| Schema | Comma-delimited list of CIDRs | +| Default | none | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `FELIX_ETCDADDR` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, the etcd server and port to connect to. If EtcdEndpoints is also specified, it takes precedence. 
| -| Schema | String matching regex `^[^:/]+:\d+$` | -| Default | `127.0.0.1:2379` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_BPFEXCLUDECIDRSFROMNAT` | +| Description | A list of CIDRs that are to be excluded from NAT resolution so that the host can handle them. A typical use case is a node-local DNS cache. | +| Schema | Comma-delimited list of CIDRs | +| Default | none | -#### `EtcdCaFile` +#### `BPFExportBufferSizeMB` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `EtcdCaFile` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS CA file to use when connecting to etcd. If the CA file is specified, the other TLS parameters are mandatory. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------- | +| Key | `BPFExportBufferSizeMB` | +| Description | In BPF mode, controls the buffer size used for sending BPF events to felix. 
| +| Schema | Integer | +| Default | `1` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `FELIX_ETCDCAFILE` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS CA file to use when connecting to etcd. If the CA file is specified, the other TLS parameters are mandatory. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | --------------------------------------------------------------------------- | +| Key | `FELIX_BPFEXPORTBUFFERSIZEMB` | +| Description | In BPF mode, controls the buffer size used for sending BPF events to felix. | +| Schema | Integer | +| Default | `1` | -#### `EtcdCertFile` +#### `BPFExtToServiceConnmark` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `EtcdCertFile` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS certificate file to use when connecting to etcd. If the certificate file is specified, the other TLS parameters are mandatory. 
| -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFExtToServiceConnmark` | +| Description | In BPF mode, controls a 32-bit mark that is set on connections from an external client to a local service. This mark allows us to control how packets of that connection are routed within the host and how routing is interpreted by the RPF check. | +| Schema | Integer | +| Default | `0` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `FELIX_ETCDCERTFILE` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS certificate file to use when connecting to etcd. If the certificate file is specified, the other TLS parameters are mandatory. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFEXTTOSERVICECONNMARK` | +| Description | In BPF mode, controls a 32-bit mark that is set on connections from an external client to a local service. 
This mark allows us to control how packets of that connection are routed within the host and how routing is interpreted by the RPF check. | +| Schema | Integer | +| Default | `0` | -#### `EtcdEndpoints` +#### `BPFExternalServiceMode` **Tab: Configuration file** -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `EtcdEndpoints` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, comma-delimited list of etcd endpoints to connect to, replaces EtcdAddr and EtcdScheme. | -| Schema | List of HTTP endpoints: comma-delimited list of `http(s)://hostname:port` | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFExternalServiceMode` | +| Description | In BPF mode, controls how connections from outside the cluster to services (node ports and cluster IPs) are forwarded to remote workloads. If set to "Tunnel" then both request and response traffic is tunneled to the remote node. If set to "DSR", the request traffic is tunneled but the response traffic is sent directly from the remote node. In "DSR" mode, the remote node appears to use the IP of the ingress node; this requires a permissive L2 network. 
| +| Schema | One of: `dsr`, `tunnel` (case insensitive) | +| Default | `tunnel` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_ETCDENDPOINTS` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, comma-delimited list of etcd endpoints to connect to, replaces EtcdAddr and EtcdScheme. | -| Schema | List of HTTP endpoints: comma-delimited list of `http(s)://hostname:port` | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFEXTERNALSERVICEMODE` | +| Description | In BPF mode, controls how connections from outside the cluster to services (node ports and cluster IPs) are forwarded to remote workloads. If set to "Tunnel" then both request and response traffic is tunneled to the remote node. If set to "DSR", the request traffic is tunneled but the response traffic is sent directly from the remote node. In "DSR" mode, the remote node appears to use the IP of the ingress node; this requires a permissive L2 network. 
| +| Schema | One of: `dsr`, `tunnel` (case insensitive) | +| Default | `tunnel` | -#### `EtcdKeyFile` +#### `BPFForceTrackPacketsFromIfaces` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `EtcdKeyFile` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS private key file to use when connecting to etcd. If the key file is specified, the other TLS parameters are mandatory. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFForceTrackPacketsFromIfaces` | +| Description | In BPF mode, forces traffic from these interfaces to skip Calico's iptables NOTRACK rule, allowing traffic from those interfaces to be tracked by Linux conntrack. Should only be used for interfaces that are not used for the Calico fabric. For example, a docker bridge device for non-Calico-networked containers. 
| +| Schema | Comma-delimited list of strings, each matching the regex `^[a-zA-Z0-9:._+-]{1,15}$` | +| Default | `docker+` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_ETCDKEYFILE` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.When using the `etcdv3` datastore driver, path to TLS private key file to use when connecting to etcd. If the key file is specified, the other TLS parameters are mandatory. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFFORCETRACKPACKETSFROMIFACES` | +| Description | In BPF mode, forces traffic from these interfaces to skip Calico's iptables NOTRACK rule, allowing traffic from those interfaces to be tracked by Linux conntrack. Should only be used for interfaces that are not used for the Calico fabric. For example, a docker bridge device for non-Calico-networked containers. 
| +| Schema | Comma-delimited list of strings, each matching the regex `^[a-zA-Z0-9:._+-]{1,15}$` | +| Default | `docker+` | -#### `EtcdScheme` +#### `BPFHostConntrackBypass` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `EtcdScheme` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.EtcdAddr: when using the `etcdv3` datastore driver, the URL scheme to use. If EtcdEndpoints is also specified, it takes precedence. | -| Schema | One of: `http`, `https` (case insensitive) | -| Default | `http` | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFHostConntrackBypass` | +| Description | Controls whether to bypass Linux conntrack in BPF mode for workloads and services. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_ETCDSCHEME` | -| Description | **Open source-only parameter**; `etcdv3` datastore driver is not supported in Calico Enterprise/Cloud.EtcdAddr: when using the `etcdv3` datastore driver, the URL scheme to use. If EtcdEndpoints is also specified, it takes precedence. 
| -| Schema | One of: `http`, `https` (case insensitive) | -| Default | `http` | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFHOSTCONNTRACKBYPASS` | +| Description | Controls whether to bypass Linux conntrack in BPF mode for workloads and services. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | -#### `FelixHostname` +#### `BPFHostNetworkedNATWithoutCTLB` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FelixHostname` | -| Description | The name of this node, used to identify resources in the datastore that belong to this node. Auto-detected from the node's hostname if not provided. | -| Schema | String matching regex `^[a-zA-Z0-9_.-]+$` | -| Default | none | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFHostNetworkedNATWithoutCTLB` | +| Description | When in BPF mode, controls whether Felix does a NAT without CTLB. This along with BPFConnectTimeLoadBalancing determines the CTLB behavior. | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Enabled` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_FELIXHOSTNAME` | -| Description | The name of this node, used to identify resources in the datastore that belong to this node. Auto-detected from the node's hostname if not provided. 
| -| Schema | String matching regex `^[a-zA-Z0-9_.-]+$` | -| Default | none | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFHOSTNETWORKEDNATWITHOUTCTLB` | +| Description | When in BPF mode, controls whether Felix does a NAT without CTLB. This along with BPFConnectTimeLoadBalancing determines the CTLB behavior. | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Enabled` | -#### `TyphaAddr` +#### `BPFKubeProxyEndpointSlicesEnabled` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------- | -| Key | `TyphaAddr` | -| Description | If set, tells Felix to connect to Typha at the given address and port. Overrides TyphaK8sServiceName. | -| Schema | String matching regex `^[^:/]+:\d+$` | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFKubeProxyEndpointSlicesEnabled` | +| Description | Deprecated and has no effect. BPF kube-proxy always accepts endpoint slices. This option will be removed in the next release. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------- | -| Key | `FELIX_TYPHAADDR` | -| Description | If set, tells Felix to connect to Typha at the given address and port. Overrides TyphaK8sServiceName. 
| -| Schema | String matching regex `^[^:/]+:\d+$` | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFKUBEPROXYENDPOINTSLICESENABLED` | +| Description | Deprecated and has no effect. BPF kube-proxy always accepts endpoint slices. This option will be removed in the next release. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | -#### `TyphaCAFile` +#### `BPFKubeProxyIptablesCleanupEnabled` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `TyphaCAFile` | -| Description | Path to the TLS CA file to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the CA file is extracted from the tigera-ca-bundle ConfigMap under the TyphaK8sNamespace namespace. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `BPFKubeProxyIptablesCleanupEnabled` | +| Description | If enabled in BPF mode, Felix will proactively clean up the upstream Kubernetes kube-proxy's iptables chains. Should only be enabled if kube-proxy is not running. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `true` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_TYPHACAFILE` | -| Description | Path to the TLS CA file to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the CA file is extracted from the tigera-ca-bundle ConfigMap under the TyphaK8sNamespace namespace. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_BPFKUBEPROXYIPTABLESCLEANUPENABLED` | +| Description | If enabled in BPF mode, Felix will proactively clean up the upstream Kubernetes kube-proxy's iptables chains. Should only be enabled if kube-proxy is not running. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | -#### `TyphaCN` +#### `BPFKubeProxyMinSyncPeriod` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `TyphaCN` | -| Description | Common name to use when authenticating to Typha over TLS. If any TLS parameters are specified then one of TyphaCN and TyphaURISAN must be set. 
| -| Schema | String | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFKubeProxyMinSyncPeriod` | +| Description | In BPF mode, controls the minimum time between updates to the dataplane for Felix's embedded kube-proxy. Lower values give reduced set-up latency. Higher values reduce Felix CPU usage by batching up more work. | +| Schema | Seconds (floating point) | +| Default | `1` (1s) | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_TYPHACN` | -| Description | Common name to use when authenticating to Typha over TLS. If any TLS parameters are specified then one of TyphaCN and TyphaURISAN must be set. | -| Schema | String | -| Default | none | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFKUBEPROXYMINSYNCPERIOD` | +| Description | In BPF mode, controls the minimum time between updates to the dataplane for Felix's embedded kube-proxy. Lower values give reduced set-up latency. Higher values reduce Felix CPU usage by batching up more work. 
|
+| Schema      | Seconds (floating point)                                                                                                                                                                                           |
+| Default     | `1` (1s)                                                                                                                                                                                                           |

-#### `TyphaCertFile`
+#### `BPFL3IfacePattern`

**Tab: Configuration file**

-| Attribute   | Value                                                                                                                                                                                                                                               |
-| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key         | `TyphaCertFile`                                                                                                                                                                                                                                     |
-| Description | Path to the TLS certificate to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the certificate will be signed by the in-cluster Tigera operator signer.   |
-| Schema      | Path to file, which must exist                                                                                                                                                                                                                      |
-| Default     | none                                                                                                                                                                                                                                                |
+| Attribute   | Value                                                                                                                                                                                                                                                                                                                                      |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Key         | `BPFL3IfacePattern`                                                                                                                                                                                                                                                                                                                        |
+| Description | A regular expression that allows listing tunnel devices like wireguard or vxlan (i.e., L3 devices) in addition to BPFDataIfacePattern. That is, tunnel interfaces not created by Calico that Calico workload traffic flows over, as well as any interfaces that handle incoming traffic to nodeports and services from outside the cluster.  |
+| Schema      | Regular expression                                                                                                                                                                                                                                                                                                                         |
+| Default     | none                                                                                                                                                                                                                                                                                                                                       |

**Tab: Environment variable**

-| Attribute   | Value                                                                                                                                                                                                                                               |
-| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key         | `FELIX_TYPHACERTFILE`                                                                                                                                                                                                                               |
-| Description | Path to the TLS certificate to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the certificate will be signed by the in-cluster Tigera operator signer.   |
-| Schema      | Path to file, which must exist                                                                                                                                                                                                                      |
-| Default     | none                                                                                                                                                                                                                                                |
+| Attribute   | Value                                                                                                                                                                                                                                                                                                                                      |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Key         | `FELIX_BPFL3IFACEPATTERN`                                                                                                                                                                                                                                                                                                                  |
+| Description | A regular expression that allows listing tunnel devices like wireguard or vxlan (i.e., L3 devices) in addition to BPFDataIfacePattern. That is, tunnel interfaces not created by Calico that Calico workload traffic flows over, as well as any interfaces that handle incoming traffic to nodeports and services from outside the cluster.  |
+| Schema      | Regular expression                                                                                                                                                                                                                                                                                                                         |
+| Default     | none                                                                                                                                                                                                                                                                                                                                       |

-#### `TyphaK8sNamespace`
+#### `BPFLogFilters`

**Tab: Configuration file**

-| Attribute   | Value                                                                            |
-| ----------- | -------------------------------------------------------------------------------- |
-| Key         | `TyphaK8sNamespace`                                                              |
-| Description | Namespace to look in when looking for Typha's service (see TyphaK8sServiceName). |
-| Schema      | String                                                                           |
-| Default     | `kube-system`                                                                    |
+| Attribute   | Value                                                                                                                                                                                                                                                                            |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `BPFLogFilters`                                                                                                                                                                                                                                                                  |
+| Description | A map of key=values where the value is a pcap filter expression and the key is an interface name with 'all' denoting all interfaces, 'weps' all workload endpoints and 'heps' all host endpoints. When specified as an env var, it accepts a comma-separated list of key=values.  |
+| Schema      | Comma-delimited list of key=value pairs                                                                                                                                                                                                                                          |
+| Default     | none                                                                                                                                                                                                                                                                             |

**Tab: Environment variable**

-| Attribute   | Value                                                                            |
-| ----------- | -------------------------------------------------------------------------------- |
-| Key         | `FELIX_TYPHAK8SNAMESPACE`                                                        |
-| Description | Namespace to look in when looking for Typha's service (see TyphaK8sServiceName). |
-| Schema      | String                                                                           |
-| Default     | `kube-system`                                                                    |
+| Attribute   | Value                                                                                                                                                                                                                                                                            |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `FELIX_BPFLOGFILTERS`                                                                                                                                                                                                                                                            |
+| Description | A map of key=values where the value is a pcap filter expression and the key is an interface name with 'all' denoting all interfaces, 'weps' all workload endpoints and 'heps' all host endpoints. When specified as an env var, it accepts a comma-separated list of key=values.  |
+| Schema      | Comma-delimited list of key=value pairs                                                                                                                                                                                                                                          |
+| Default     | none                                                                                                                                                                                                                                                                             |

-#### `TyphaK8sServiceName`
+#### `BPFLogLevel`

**Tab: Configuration file**

-| Attribute   | Value                                                                                                                                            |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key         | `TyphaK8sServiceName`                                                                                                                            |
-| Description | If set, tells Felix to connect to Typha by looking up the Endpoints of the given Kubernetes Service in namespace specified by TyphaK8sNamespace. |
-| Schema      | String                                                                                                                                           |
-| Default     | none                                                                                                                                             |
+| Attribute   | Value                                                                                                                                                                                                 |
+| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `BPFLogLevel`                                                                                                                                                                                         |
+| Description | Controls the log level of the BPF programs when in BPF dataplane mode. One of "Off", "Info", or "Debug". The logs are emitted to the BPF trace pipe, accessible with the command `tc exec bpf debug`.  |
+| Schema      | One of: `debug`, `info`, `off` (case insensitive)                                                                                                                                                     |
+| Default     | `off`                                                                                                                                                                                                 |

**Tab: Environment variable**

-| Attribute   | Value                                                                                                                                            |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key         | `FELIX_TYPHAK8SSERVICENAME`                                                                                                                      |
-| Description | If set, tells Felix to connect to Typha by looking up the Endpoints of the given Kubernetes Service in namespace specified by TyphaK8sNamespace. |
-| Schema      | String                                                                                                                                           |
-| Default     | none                                                                                                                                             |
+| Attribute   | Value                                                                                                                                                                                                 |
+| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `FELIX_BPFLOGLEVEL`                                                                                                                                                                                   |
+| Description | Controls the log level of the BPF programs when in BPF dataplane mode. One of "Off", "Info", or "Debug". The logs are emitted to the BPF trace pipe, accessible with the command `tc exec bpf debug`.  |
+| Schema      | One of: `debug`, `info`, `off` (case insensitive)                                                                                                                                                     |
+| Default     | `off`                                                                                                                                                                                                 |

-#### `TyphaKeyFile`
+#### `BPFMapSizeConntrack`

**Tab: Configuration file**

-| Attribute   | Value                                                                                                                                                                                                                                                     |
-| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key         | `TyphaKeyFile`                                                                                                                                                                                                                                            |
-| Description | Path to the TLS private key to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. For non-cluster hosts, the private key is generated locally and rotated when the certificate expires.   |
-| Schema      | Path to file, which must exist                                                                                                                                                                                                                            |
-| Default     | none                                                                                                                                                                                                                                                      |
+| Attribute   | Value                                                                                                                                                                                  |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `BPFMapSizeConntrack`                                                                                                                                                                  |
+| Description | Sets the size for the conntrack map. This map must be large enough to hold an entry for each active connection. Warning: changing the size of the conntrack map can cause disruption.  |
+| Schema      | Integer                                                                                                                                                                                |
+| Default     | `512000`                                                                                                                                                                               |

**Tab: Environment variable**

-| Attribute   | Value                                                                                                                                                                                                                                                     |
-| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key         | `FELIX_TYPHAKEYFILE`                                                                                                                                                                                                                                      |
-| Description | Path to the TLS private key to use when communicating with Typha. If this parameter is specified, the other TLS parameters must also be specified. 
For non-cluster hosts, the private key is generated locally and rotated when the certificate expires. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFMAPSIZECONNTRACK` | +| Description | Sets the size for the conntrack map. This map must be large enough to hold an entry for each active connection. Warning: changing the size of the conntrack map can cause disruption. | +| Schema | Integer | +| Default | `512000` | -#### `TyphaReadTimeout` +#### `BPFMapSizeConntrackCleanupQueue` **Tab: Configuration file** -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `TyphaReadTimeout` | -| Description | Read timeout when reading from the Typha connection. If typha sends no data for this long, Felix will exit and restart. (Note that Typha sends regular pings so traffic is always expected.) | -| Schema | Seconds (floating point) | -| Default | `30` | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFMapSizeConntrackCleanupQueue` | +| Description | Sets the size for the map used to hold NAT conntrack entries that are queued for cleanup. This should be big enough to hold all the NAT entries that expire within one cleanup interval. 
| +| Schema | Integer | +| Default | `100000` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_TYPHAREADTIMEOUT` | -| Description | Read timeout when reading from the Typha connection. If typha sends no data for this long, Felix will exit and restart. (Note that Typha sends regular pings so traffic is always expected.) | -| Schema | Seconds (floating point) | -| Default | `30` | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFMAPSIZECONNTRACKCLEANUPQUEUE` | +| Description | Sets the size for the map used to hold NAT conntrack entries that are queued for cleanup. This should be big enough to hold all the NAT entries that expire within one cleanup interval. | +| Schema | Integer | +| Default | `100000` | -#### `TyphaURISAN` +#### `BPFMapSizeConntrackScaling` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `TyphaURISAN` | -| Description | URI SAN to use when authenticating to Typha over TLS. If any TLS parameters are specified then one of TyphaCN and TyphaURISAN must be set. 
|
-| Schema      | String                                                                                                                                     |
-| Default     | none                                                                                                                                       |
+| Attribute   | Value                                                                                                                                                                                                                                             |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `BPFMapSizeConntrackScaling`                                                                                                                                                                                                                      |
+| Description | Controls whether and how we scale the conntrack map size depending on its usage. 'Disabled' makes the size stay at the default or whatever is set by BPFMapSizeConntrack\*. 'DoubleIfFull' doubles the size when the map is pretty much full even after cleanups.  |
+| Schema      | One of: `Disabled`, `DoubleIfFull` (case insensitive)                                                                                                                                                                                             |
+| Default     | `DoubleIfFull`                                                                                                                                                                                                                                    |

**Tab: Environment variable**

-| Attribute   | Value                                                                                                                                      |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key         | `FELIX_TYPHAURISAN`                                                                                                                        |
-| Description | URI SAN to use when authenticating to Typha over TLS. If any TLS parameters are specified then one of TyphaCN and TyphaURISAN must be set. |
-| Schema      | String                                                                                                                                     |
-| Default     | none                                                                                                                                       |
+| Attribute   | Value                                                                                                                                                                                                                                             |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `FELIX_BPFMAPSIZECONNTRACKSCALING`                                                                                                                                                                                                                |
+| Description | Controls whether and how we scale the conntrack map size depending on its usage. 'Disabled' makes the size stay at the default or whatever is set by BPFMapSizeConntrack\*. 'DoubleIfFull' doubles the size when the map is pretty much full even after cleanups. 
| +| Schema | One of: `Disabled`, `DoubleIfFull` (case insensitive) | +| Default | `DoubleIfFull` | -#### `TyphaWriteTimeout` +#### `BPFMapSizeIPSets` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ----------------------------------------- | -| Key | `TyphaWriteTimeout` | -| Description | Write timeout when writing data to Typha. | -| Schema | Seconds (floating point) | -| Default | `10` | +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFMapSizeIPSets` | +| Description | Sets the size for ipsets map. The IP sets map must be large enough to hold an entry for each endpoint matched by every selector in the source/destination matches in network policy. Selectors such as "all()" can result in large numbers of entries (one entry per endpoint in that case). | +| Schema | Integer | +| Default | `1048576` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ----------------------------------------- | -| Key | `FELIX_TYPHAWRITETIMEOUT` | -| Description | Write timeout when writing data to Typha. | -| Schema | Seconds (floating point) | -| Default | `10` | +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFMAPSIZEIPSETS` | +| Description | Sets the size for ipsets map. The IP sets map must be large enough to hold an entry for each endpoint matched by every selector in the source/destination matches in network policy. 
Selectors such as "all()" can result in large numbers of entries (one entry per endpoint in that case). | +| Schema | Integer | +| Default | `1048576` | -### Process: Feature detection/overrides[​](#process-feature-detectionoverrides) - -#### `FeatureDetectOverride` +#### `BPFMapSizeIfState` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `FeatureDetectOverride` | -| Description | Used to override feature detection based on auto-detected platform capabilities. Values are specified in a comma separated list with no spaces, example; "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=". A value of "true" or "false" will force enable/disable feature, empty or omitted values fall back to auto-detection. | -| Schema | Comma-delimited list of key=value pairs | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------- | +| Key | `BPFMapSizeIfState` | +| Description | Sets the size for ifstate map. The ifstate map must be large enough to hold an entry for each device (host + workloads) on a host. 
| +| Schema | Integer | +| Default | `1000` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `FELIX_FEATUREDETECTOVERRIDE` | -| Description | Used to override feature detection based on auto-detected platform capabilities. Values are specified in a comma separated list with no spaces, example; "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=". A value of "true" or "false" will force enable/disable feature, empty or omitted values fall back to auto-detection. | -| Schema | Comma-delimited list of key=value pairs | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFMAPSIZEIFSTATE` | +| Description | Sets the size for ifstate map. The ifstate map must be large enough to hold an entry for each device (host + workloads) on a host. | +| Schema | Integer | +| Default | `1000` | -#### `FeatureGates` +#### `BPFMapSizeNATAffinity` **Tab: Configuration file** -| Attribute | Value | -| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FeatureGates` | -| Description | Used to enable or disable tech-preview Calico features. Values are specified in a comma separated list with no spaces, example; "BPFConnectTimeLoadBalancingWorkaround=enabled,XyZ=false". 
This is used to enable features that are not fully production ready. |
-| Schema      | Comma-delimited list of key=value pairs                                                                                                                                                                                                          |
-| Default     | none                                                                                                                                                                                                                                             |
+| Attribute   | Value                                                                                                          |
+| ----------- | -------------------------------------------------------------------------------------------------------------- |
+| Key         | `BPFMapSizeNATAffinity`                                                                                        |
+| Description | Sets the size of the BPF map that stores the affinity of a connection (for services that enable that feature). |
+| Schema      | Integer                                                                                                        |
+| Default     | `65536`                                                                                                        |

**Tab: Environment variable**

-| Attribute   | Value                                                                                                                                                                                                                                            |
-| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Key         | `FELIX_FEATUREGATES`                                                                                                                                                                                                                             |
-| Description | Used to enable or disable tech-preview Calico features. Values are specified in a comma separated list with no spaces, example; "BPFConnectTimeLoadBalancingWorkaround=enabled,XyZ=false". This is used to enable features that are not fully production ready. |
-| Schema      | Comma-delimited list of key=value pairs                                                                                                                                                                                                          |
-| Default     | none                                                                                                                                                                                                                                             |
+| Attribute   | Value                                                                                                          |
+| ----------- | -------------------------------------------------------------------------------------------------------------- |
+| Key         | `FELIX_BPFMAPSIZENATAFFINITY`                                                                                  |
+| Description | Sets the size of the BPF map that stores the affinity of a connection (for services that enable that feature). |
+| Schema      | Integer                                                                                                        |
+| Default     | `65536`                                                                                                        |

-### Process: Go runtime[​](#process-go-runtime)
-
-#### `GoGCThreshold`
+#### `BPFMapSizeNATBackend`

**Tab: Configuration file**

-| Attribute   | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
-| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key         | `GoGCThreshold`                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
-| Description | Sets the Go runtime's garbage collection threshold. I.e. the percentage that the heap is allowed to grow before garbage collection is triggered. In general, doubling the value halves the CPU time spent doing GC, but it also doubles peak GC memory overhead. A special value of -1 can be used to disable GC entirely; this should only be used in conjunction with the GoMemoryLimitMB setting.This setting is overridden by the GOGC environment variable.   |
-| Schema      | Integer: \[-1,263-1]                                                                                                                                                                                                                                                                                                                                                                                                                                              |
-| Default     | `40`                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+| Attribute   | Value                                                                                                                                  |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `BPFMapSizeNATBackend`                                                                                                                 |
+| Description | Sets the size for the NAT back end map. This is the total number of endpoints, which is typically larger than the number of services.  |
+| Schema      | Integer                                                                                                                                |
+| Default     | `262144`                                                                                                                               |

**Tab: Environment variable**

-| Attribute   | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
-| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key         | `FELIX_GOGCTHRESHOLD`                                                                                                                                                                                                                                                                                                                                                                                                                                             |
-| Description | Sets the Go runtime's garbage collection threshold. I.e. the percentage that the heap is allowed to grow before garbage collection is triggered. In general, doubling the value halves the CPU time spent doing GC, but it also doubles peak GC memory overhead. A special value of -1 can be used to disable GC entirely; this should only be used in conjunction with the GoMemoryLimitMB setting.This setting is overridden by the GOGC environment variable.   |
-| Schema      | Integer: \[-1,263-1]                                                                                                                                                                                                                                                                                                                                                                                                                                              |
-| Default     | `40`                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+| Attribute   | Value                                                                                                                                  |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------- |
+| Key         | `FELIX_BPFMAPSIZENATBACKEND`                                                                                                           |
+| Description | Sets the size for the NAT back end map. This is the total number of endpoints, which is typically larger than the number of services. 
| +| Schema | Integer | +| Default | `262144` | -#### `GoMaxProcs` +#### `BPFMapSizeNATFrontend` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `GoMaxProcs` | -| Description | Sets the maximum number of CPUs that the Go runtime will use concurrently. A value of -1 means "use the system default"; typically the number of real CPUs on the system. This setting is overridden by the GOMAXPROCS environment variable. | -| Schema | Integer: \[-1,2^63-1] | -| Default | `-1` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `BPFMapSizeNATFrontend` | +| Description | Sets the size for the NAT frontend map. FrontendMap should be large enough to hold an entry for each nodeport, external IP, and each port in each service. | +| Schema | Integer | +| Default | `65536` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_GOMAXPROCS` | -| Description | Sets the maximum number of CPUs that the Go runtime will use concurrently. A value of -1 means "use the system default"; typically the number of real CPUs on the system. This setting is overridden by the GOMAXPROCS environment variable.
| -| Schema | Integer: \[-1,2^63-1] | -| Default | `-1` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_BPFMAPSIZENATFRONTEND` | +| Description | Sets the size for the NAT frontend map. FrontendMap should be large enough to hold an entry for each nodeport, external IP, and each port in each service. | +| Schema | Integer | +| Default | `65536` | -#### `GoMemoryLimitMB` +#### `BPFMapSizePerCPUConntrack` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `GoMemoryLimitMB` | -| Description | Sets a (soft) memory limit for the Go runtime in MB. The Go runtime will try to keep its memory usage under the limit by triggering GC as needed. To avoid thrashing, it will exceed the limit if GC starts to take more than 50% of the process's CPU time. A value of -1 disables the memory limit. Note that the memory limit, if used, must be considerably less than any hard resource limit set at the container or pod level. This is because Felix is not the only process that must run in the container or pod. This setting is overridden by the GOMEMLIMIT environment variable.
| -| Schema | Integer: \[-1,2^63-1] | -| Default | `-1` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `BPFMapSizePerCPUConntrack` | +| Description | Determines the size of the conntrack map based on the number of CPUs. If set to a non-zero value, overrides BPFMapSizeConntrack with `BPFMapSizePerCPUConntrack * (Number of CPUs)`. This map must be large enough to hold an entry for each active connection. Warning: changing the size of the conntrack map can cause disruption. | +| Schema | Integer | +| Default | `0` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_GOMEMORYLIMITMB` | -| Description | Sets a (soft) memory limit for the Go runtime in MB. The Go runtime will try to keep its memory usage under the limit by triggering GC as needed. To avoid thrashing, it will exceed the limit if GC starts to take more than 50% of the process's CPU time. A value of -1 disables the memory limit. Note that the memory limit, if used, must be considerably less than any hard resource limit set at the container or pod level.
This is because Felix is not the only process that must run in the container or pod. This setting is overridden by the GOMEMLIMIT environment variable. | -| Schema | Integer: \[-1,2^63-1] | -| Default | `-1` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_BPFMAPSIZEPERCPUCONNTRACK` | +| Description | Determines the size of the conntrack map based on the number of CPUs. If set to a non-zero value, overrides BPFMapSizeConntrack with `BPFMapSizePerCPUConntrack * (Number of CPUs)`. This map must be large enough to hold an entry for each active connection. Warning: changing the size of the conntrack map can cause disruption. | +| Schema | Integer | +| Default | `0` | -### Process: Health port and timeouts[​](#process-health-port-and-timeouts) - -#### `HealthEnabled` +#### `BPFMapSizeRoute` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `HealthEnabled` | -| Description | If set to true, enables Felix's health port, which provides readiness and liveness endpoints. | -| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | -| Default | `false` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `BPFMapSizeRoute` | +| Description | Sets the size for the routes map.
The routes map should be large enough to hold one entry per workload and a handful of entries per host (enough to cover its own IPs and tunnel IPs). | +| Schema | Integer | +| Default | `262144` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_HEALTHENABLED` | -| Description | If set to true, enables Felix's health port, which provides readiness and liveness endpoints. | -| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | -| Default | `false` | +| Attribute | Value | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFMAPSIZEROUTE` | +| Description | Sets the size for the routes map. The routes map should be large enough to hold one entry per workload and a handful of entries per host (enough to cover its own IPs and tunnel IPs). | +| Schema | Integer | +| Default | `262144` | -#### `HealthHost` +#### `BPFPSNATPorts` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------ | -| Key | `HealthHost` | -| Description | The host that the health server should bind to. 
| -| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,64}$` | -| Default | `localhost` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `BPFPSNATPorts` | +| Description | Sets the range from which we randomly pick a port if there is a source port collision. This should be within the ephemeral range as defined by RFC 6056 (1024–65535) and preferably outside the ephemeral ranges used by common operating systems. Linux uses 32768–60999, while others mostly use the IANA defined range 49152–65535. It is not necessarily a problem if this range overlaps with the operating system's ephemeral range. Both ends of the range are inclusive. | +| Schema | Port range: either a single number in \[0,65535] or a range of numbers `n:m` | +| Default | `20000:29999` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_HEALTHHOST` | -| Description | The host that the health server should bind to.
| -| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,64}$` | -| Default | `localhost` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_BPFPSNATPORTS` | +| Description | Sets the range from which we randomly pick a port if there is a source port collision. This should be within the ephemeral range as defined by RFC 6056 (1024–65535) and preferably outside the ephemeral ranges used by common operating systems. Linux uses 32768–60999, while others mostly use the IANA defined range 49152–65535. It is not necessarily a problem if this range overlaps with the operating system's ephemeral range. Both ends of the range are inclusive. | +| Schema | Port range: either a single number in \[0,65535] or a range of numbers `n:m` | +| Default | `20000:29999` | -#### `HealthPort` +#### `BPFPolicyDebugEnabled` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `HealthPort` | -| Description | The TCP port that the health server should bind to. | -| Schema | Integer: \[0,65535] | -| Default | `9099` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `BPFPolicyDebugEnabled` | +| Description | When true, Felix records detailed information about the BPF policy programs, which can be examined with the calico-bpf command-line tool.
| +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | --------------------------------------------------- | -| Key | `FELIX_HEALTHPORT` | -| Description | The TCP port that the health server should bind to. | -| Schema | Integer: \[0,65535] | -| Default | `9099` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_BPFPOLICYDEBUGENABLED` | +| Description | When true, Felix records detailed information about the BPF policy programs, which can be examined with the calico-bpf command-line tool. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `true` | -#### `HealthTimeoutOverrides` +#### `BPFProfiling` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `HealthTimeoutOverrides` | -| Description | Allows the internal watchdog timeouts of individual subcomponents to be overridden. This is useful for working around "false positive" liveness timeouts that can occur in particularly stressful workloads or if CPU is constrained. For a list of active subcomponents, see Felix's logs. | -| Schema | Comma-delimited list of `=` pairs, where durations use Go's standard format (e.g. 
1s, 1m, 1h3m2s) | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `BPFProfiling` | +| Description | Controls profiling of BPF programs. At the moment, it can be Disabled or Enabled. | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_HEALTHTIMEOUTOVERRIDES` | -| Description | Allows the internal watchdog timeouts of individual subcomponents to be overridden. This is useful for working around "false positive" liveness timeouts that can occur in particularly stressful workloads or if CPU is constrained. For a list of active subcomponents, see Felix's logs. | -| Schema | Comma-delimited list of `=` pairs, where durations use Go's standard format (e.g. 1s, 1m, 1h3m2s) | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_BPFPROFILING` | +| Description | Controls profiling of BPF programs. At the moment, it can be Disabled or Enabled.
| +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | -### Process: Logging[​](#process-logging) - -#### `LogDebugFilenameRegex` +#### `BPFRedirectToPeer` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `LogDebugFilenameRegex` | -| Description | Controls which source code files have their Debug log output included in the logs. Only logs from files with names that match the given regular expression are included. The filter only applies to Debug level logs. | -| Schema | Regular expression | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `BPFRedirectToPeer` | +| Description | Controls whether it is allowed to forward straight to the peer side of the workload devices. It is allowed for any host L2 devices by default (L2Only), but it breaks tcpdump on the host side of the workload device as it bypasses it on ingress. A value of Enabled also allows redirection from L3 host devices like IPIP tunnel or Wireguard directly to the peer side of the workload's device. This makes redirection faster; however, it breaks tools like tcpdump on the peer side. Use Enabled with caution.
| +| Schema | One of: `Disabled`, `Enabled`, `L2Only` (case insensitive) | +| Default | `Disabled` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_LOGDEBUGFILENAMEREGEX` | -| Description | Controls which source code files have their Debug log output included in the logs. Only logs from files with names that match the given regular expression are included. The filter only applies to Debug level logs. | -| Schema | Regular expression | -| Default | none | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_BPFREDIRECTTOPEER` | +| Description | Controls whether it is allowed to forward straight to the peer side of the workload devices. It is allowed for any host L2 devices by default (L2Only), but it breaks tcpdump on the host side of the workload device as it bypasses it on ingress. A value of Enabled also allows redirection from L3 host devices like IPIP tunnel or Wireguard directly to the peer side of the workload's device. This makes redirection faster; however, it breaks tools like tcpdump on the peer side. Use Enabled with caution.
| +| Schema | One of: `Disabled`, `Enabled`, `L2Only` (case insensitive) | +| Default | `Disabled` | -#### `LogDropActionOverride` +### Data plane: Windows[​](#data-plane-windows) + +#### `WindowsDNSCacheFile` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `LogDropActionOverride` | -| Description | Specifies whether or not to include the DropActionOverride in the logs when it is triggered. | -| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | -| Default | `false` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `WindowsDNSCacheFile` | +| Description | The name of the file that Felix uses to preserve learnt DNS information when restarting. | +| Schema | Path to file | +| Default | `c:\TigeraCalico\felix-dns-cache.txt` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_LOGDROPACTIONOVERRIDE` | -| Description | Specifies whether or not to include the DropActionOverride in the logs when it is triggered. | -| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | -| Default | `false` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_WINDOWSDNSCACHEFILE` | +| Description | The name of the file that Felix uses to preserve learnt DNS information when restarting.
| +| Schema | Path to file | +| Default | `c:\TigeraCalico\felix-dns-cache.txt` | -#### `LogFilePath` +#### `WindowsDNSExtraTTL` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `LogFilePath` | -| Description | The full path to the Felix log. Set to none to disable file logging. | -| Schema | Path to file | -| Default | `/var/log/calico/felix.log` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `WindowsDNSExtraTTL` | +| Description | Extra time to keep IPs and alias names that are learnt from DNS, in addition to each name or IP's advertised TTL. The default value is 120s, which is the same as the default value of ServicePointManager.DnsRefreshTimeout on the .NET Framework. | +| Schema | Seconds (floating point) | +| Default | `120` (2m0s) | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_LOGFILEPATH` | -| Description | The full path to the Felix log. Set to none to disable file logging. | -| Schema | Path to file | -| Default | `/var/log/calico/felix.log` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_WINDOWSDNSEXTRATTL` | +| Description | Extra time to keep IPs and alias names that are learnt from DNS, in addition to each name or IP's advertised TTL. The default value is 120s, which is the same as the default value of ServicePointManager.DnsRefreshTimeout on the .NET Framework.
| +| Schema | Seconds (floating point) | +| Default | `120` (2m0s) | -#### `LogPrefix` +#### `WindowsFlowLogsFileDirectory` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `LogPrefix` | -| Description | The log prefix that Felix uses when rendering LOG rules. | -| Schema | String | -| Default | `calico-packet` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `WindowsFlowLogsFileDirectory` | +| Description | Sets the directory where flow log files are stored on Windows nodes. | +| Schema | String | +| Default | `c:\TigeraCalico\flowlogs` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_LOGPREFIX` | -| Description | The log prefix that Felix uses when rendering LOG rules. | -| Schema | String | -| Default | `calico-packet` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_WINDOWSFLOWLOGSFILEDIRECTORY` | +| Description | Sets the directory where flow log files are stored on Windows nodes. | +| Schema | String | +| Default | `c:\TigeraCalico\flowlogs` | -#### `LogSeverityFile` +#### `WindowsFlowLogsPositionFilePath` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `LogSeverityFile` | -| Description | The log severity above which logs are sent to the log file.
| -| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | -| Default | `INFO` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `WindowsFlowLogsPositionFilePath` | +| Description | Used to specify the position file for the external pipeline that reads flow logs on Windows nodes. This parameter only takes effect when FlowLogsDynamicAggregationEnabled is set to true. | +| Schema | String | +| Default | `c:\TigeraCalico\flowlogs\flows.log.pos` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `FELIX_LOGSEVERITYFILE` | -| Description | The log severity above which logs are sent to the log file. | -| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | -| Default | `INFO` | +| Attribute | Value | +| ----------- | ---------------------------------------- | +| Key | `FELIX_WINDOWSFLOWLOGSPOSITIONFILEPATH` | +| Description | Used to specify the position file for the external pipeline that reads flow logs on Windows nodes. This parameter only takes effect when FlowLogsDynamicAggregationEnabled is set to true. | +| Schema | String | +| Default | `c:\TigeraCalico\flowlogs\flows.log.pos` | -#### `LogSeverityScreen` +#### `WindowsManageFirewallRules` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ---------------------------------------- | -| Key | `LogSeverityScreen` | -| Description | The log severity above which logs are sent to the stdout.
| -| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | -| Default | `INFO` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------- | +| Key | `WindowsManageFirewallRules` | +| Description | Configures whether or not Felix will program Windows Firewall rules (to allow inbound access to its own metrics ports). | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | -------------------------------------------------------------------------------- | -| Key | `FELIX_LOGSEVERITYSCREEN` | -| Description | The log severity above which logs are sent to the stdout. | -| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | -| Default | `INFO` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_WINDOWSMANAGEFIREWALLRULES` | +| Description | Configures whether or not Felix will program Windows Firewall rules (to allow inbound access to its own metrics ports). | +| Schema | One of: `Disabled`, `Enabled` (case insensitive) | +| Default | `Disabled` | -#### `LogSeveritySys` +#### `WindowsNetworkName` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------- | -| Key | `LogSeveritySys` | -| Description | The log severity above which logs are sent to the syslog. Set to None for no logging to syslog. 
| -| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | -| Default | `INFO` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `WindowsNetworkName` | +| Description | Specifies which Windows HNS networks Felix should operate on. The default is to match networks that start with "calico". Supports regular expression syntax. | +| Schema | Regular expression | +| Default | `(?i)calico.*` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------- | -| Key | `FELIX_LOGSEVERITYSYS` | -| Description | The log severity above which logs are sent to the syslog. Set to None for no logging to syslog. | -| Schema | One of: `DEBUG`, `ERROR`, `FATAL`, `INFO`, `TRACE`, `WARNING` (case insensitive) | -| Default | `INFO` | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `FELIX_WINDOWSNETWORKNAME` | +| Description | Specifies which Windows HNS networks Felix should operate on. The default is to match networks that start with "calico". Supports regular expression syntax. 
| +| Schema | Regular expression | +| Default | `(?i)calico.*` | -### Process: Prometheus metrics[​](#process-prometheus-metrics) - -#### `PrometheusGoMetricsEnabled` +#### `WindowsStatsDumpFilePath` **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `PrometheusGoMetricsEnabled` | -| Description | Disables Go runtime metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | -| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | -| Default | `true` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------- | +| Key | `WindowsStatsDumpFilePath` | +| Description | Used to specify the path of the stats dump file on Windows nodes. | +| Schema | Path to file | +| Default | `c:\TigeraCalico\stats\dump` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `FELIX_PROMETHEUSGOMETRICSENABLED` | -| Description | Disables Go runtime metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | -| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| -| Default | `true` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------- | +| Key | `FELIX_WINDOWSSTATSDUMPFILEPATH` | +| Description | Used to specify the path of the stats dump file on Windows nodes. | +| Schema | Path to file | +| Default | `c:\TigeraCalico\stats\dump` | -#### `PrometheusMetricsCAFile` +### Data plane: OpenStack support[​](#data-plane-openstack-support) + +#### `EndpointReportingDelaySecs` **Tab: Configuration file** -| Attribute | Value | -| ----------- | -------------------------------------------------------------- | -| Key | `PrometheusMetricsCAFile` | -| Description | The path to the TLS CA file for the Prometheus metrics server. | -| Schema | Path to file, which must exist | -| Default | none | +| Attribute | Value | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Key | `EndpointReportingDelaySecs` | +| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud.The delay before Felix reports endpoint status to the datastore. This is only used by the OpenStack integration. | +| Schema | Seconds (floating point) | +| Default | `1` (1s) | **Tab: Environment variable** -| Attribute | Value | -| ----------- | -------------------------------------------------------------- | -| Key | `FELIX_PROMETHEUSMETRICSCAFILE` | -| Description | The path to the TLS CA file for the Prometheus metrics server. 
|
-| Schema | Path to file, which must exist |
-| Default | none |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `FELIX_ENDPOINTREPORTINGDELAYSECS` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The delay before Felix reports endpoint status to the datastore. This is only used by the OpenStack integration. |
+| Schema | Seconds (floating point) |
+| Default | `1` (1s) |
-#### `PrometheusMetricsCertFile`
+#### `EndpointReportingEnabled`
**Tab: Configuration file**
-| Attribute | Value |
-| ----------- | ----------------------------------------------------------------------- |
-| Key | `PrometheusMetricsCertFile` |
-| Description | The path to the TLS certificate file for the Prometheus metrics server. |
-| Schema | Path to file, which must exist |
-| Default | none |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `EndpointReportingEnabled` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. Controls whether Felix reports endpoint status to the datastore. This is only used by the OpenStack integration. |
+| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
+| Default | `false` |
**Tab: Environment variable**
-| Attribute | Value |
-| ----------- | ----------------------------------------------------------------------- |
-| Key | `FELIX_PROMETHEUSMETRICSCERTFILE` |
-| Description | The path to the TLS certificate file for the Prometheus metrics server. 
|
-| Schema | Path to file, which must exist |
-| Default | none |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `FELIX_ENDPOINTREPORTINGENABLED` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. Controls whether Felix reports endpoint status to the datastore. This is only used by the OpenStack integration. |
+| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
+| Default | `false` |
-#### `PrometheusMetricsEnabled`
+#### `MetadataAddr`
**Tab: Configuration file**
-| Attribute | Value |
-| ----------- | ---------------------------------------------------------------------------------------------------------------------------- |
-| Key | `PrometheusMetricsEnabled` |
-| Description | Enables the Prometheus metrics server in Felix if set to true. |
-| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
-| Default | `false` |
+| Attribute | Value |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `MetadataAddr` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The IP address or domain name of the server that can answer VM queries for cloud-init metadata. 
In OpenStack, this corresponds to the machine running nova-api (or in Ubuntu, nova-api-metadata). A value of none (case-insensitive) means that Felix should not set up any NAT rule for the metadata path. |
+| Schema | String matching regex `^[a-zA-Z0-9_.-]+$` |
+| Default | `127.0.0.1` |
**Tab: Environment variable**
-| Attribute | Value |
-| ----------- | ---------------------------------------------------------------------------------------------------------------------------- |
-| Key | `FELIX_PROMETHEUSMETRICSENABLED` |
-| Description | Enables the Prometheus metrics server in Felix if set to true. |
-| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
-| Default | `false` |
+| Attribute | Value |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `FELIX_METADATAADDR` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The IP address or domain name of the server that can answer VM queries for cloud-init metadata. In OpenStack, this corresponds to the machine running nova-api (or in Ubuntu, nova-api-metadata). A value of none (case-insensitive) means that Felix should not set up any NAT rule for the metadata path. 
|
+| Schema | String matching regex `^[a-zA-Z0-9_.-]+$` |
+| Default | `127.0.0.1` |
-#### `PrometheusMetricsHost`
+#### `MetadataPort`
**Tab: Configuration file**
-| Attribute | Value |
-| ----------- | ----------------------------------------------------------- |
-| Key | `PrometheusMetricsHost` |
-| Description | The host that the Prometheus metrics server should bind to. |
-| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,64}$` |
-| Default | none |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Key | `MetadataPort` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The port of the metadata server. This, combined with global.MetadataAddr (if not 'None'), is used to set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort. In most cases this should not need to be changed. |
+| Schema | Integer: \[0,65535] |
+| Default | `8775` |
**Tab: Environment variable**
-| Attribute | Value |
-| ----------- | ----------------------------------------------------------- |
-| Key | `FELIX_PROMETHEUSMETRICSHOST` |
-| Description | The host that the Prometheus metrics server should bind to. 
|
-| Schema | String matching regex `^[a-zA-Z0-9:._+-]{1,64}$` |
-| Default | none |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Key | `FELIX_METADATAPORT` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The port of the metadata server. This, combined with global.MetadataAddr (if not 'None'), is used to set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort. In most cases this should not need to be changed. |
+| Schema | Integer: \[0,65535] |
+| Default | `8775` |
-#### `PrometheusMetricsKeyFile`
+#### `OpenstackRegion`
**Tab: Configuration file**
-| Attribute | Value |
-| ----------- | ----------------------------------------------------------------------- |
-| Key | `PrometheusMetricsKeyFile` |
-| Description | The path to the TLS private key file for the Prometheus metrics server. |
-| Schema | Path to file, which must exist |
-| Default | none |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `OpenstackRegion` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The name of the region that a particular Felix belongs to. 
In a multi-region Calico/OpenStack deployment, this must be configured somehow for each Felix (here in the datamodel, or in felix.cfg or the environment on each compute node), and must match the \[calico] openstack\_region value configured in neutron.conf on each node. |
+| Schema | OpenStack region name (must be a valid DNS label) |
+| Default | none |
**Tab: Environment variable**
-| Attribute | Value |
-| ----------- | ----------------------------------------------------------------------- |
-| Key | `FELIX_PROMETHEUSMETRICSKEYFILE` |
-| Description | The path to the TLS private key file for the Prometheus metrics server. |
-| Schema | Path to file, which must exist |
-| Default | none |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `FELIX_OPENSTACKREGION` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The name of the region that a particular Felix belongs to. In a multi-region Calico/OpenStack deployment, this must be configured somehow for each Felix (here in the datamodel, or in felix.cfg or the environment on each compute node), and must match the \[calico] openstack\_region value configured in neutron.conf on each node. 
|
+| Schema | OpenStack region name (must be a valid DNS label) |
+| Default | none |
-#### `PrometheusMetricsPort`
+#### `ReportingIntervalSecs`
**Tab: Configuration file**
-| Attribute | Value |
-| ----------- | --------------------------------------------------------------- |
-| Key | `PrometheusMetricsPort` |
-| Description | The TCP port that the Prometheus metrics server should bind to. |
-| Schema | Integer: \[0,65535] |
-| Default | `9091` |
+| Attribute | Value |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `ReportingIntervalSecs` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The interval at which Felix reports its status into the datastore or 0 to disable. Must be non-zero in OpenStack deployments. |
+| Schema | Seconds (floating point) |
+| Default | `30` (30s) |
**Tab: Environment variable**
-| Attribute | Value |
-| ----------- | --------------------------------------------------------------- |
-| Key | `FELIX_PROMETHEUSMETRICSPORT` |
-| Description | The TCP port that the Prometheus metrics server should bind to. |
-| Schema | Integer: \[0,65535] |
-| Default | `9091` |
+| Attribute | Value |
+| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `FELIX_REPORTINGINTERVALSECS` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The interval at which Felix reports its status into the datastore or 0 to disable. Must be non-zero in OpenStack deployments. 
|
+| Schema | Seconds (floating point) |
+| Default | `30` (30s) |
-#### `PrometheusProcessMetricsEnabled`
+#### `ReportingTTLSecs`
**Tab: Configuration file**
-| Attribute | Value |
-| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key | `PrometheusProcessMetricsEnabled` |
-| Description | Disables process metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. |
-| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
-| Default | `true` |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Key | `ReportingTTLSecs` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The time-to-live setting for process-wide status reports. |
+| Schema | Seconds (floating point) |
+| Default | `90` (1m30s) |
**Tab: Environment variable**
-| Attribute | Value |
-| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key | `FELIX_PROMETHEUSPROCESSMETRICSENABLED` |
-| Description | Disables process metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. |
-| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
|
-| Default | `true` |
+| Attribute | Value |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Key | `FELIX_REPORTINGTTLSECS` |
+| Description | **Open source-only parameter**; OpenStack is not supported in Calico Enterprise/Cloud. The time-to-live setting for process-wide status reports. |
+| Schema | Seconds (floating point) |
+| Default | `90` (1m30s) |
-#### `PrometheusWireGuardMetricsEnabled`
+### Data plane: XDP acceleration for iptables data plane[​](#data-plane-xdp-acceleration-for-iptables-data-plane)
+
+#### `GenericXDPEnabled`
**Tab: Configuration file**
-| Attribute | Value |
-| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Key | `PrometheusWireGuardMetricsEnabled` |
-| Description | Disables WireGuard metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. |
-| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. |
-| Default | `true` |
+| Attribute | Value |
+| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Key | `GenericXDPEnabled` |
+| Description | Enables Generic XDP so network cards that don't support XDP offload or driver modes can use XDP. This is not recommended since it doesn't provide better performance than iptables. |
+| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `false` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Key | `FELIX_PROMETHEUSWIREGUARDMETRICSENABLED` | -| Description | Disables WireGuard metrics collection, which the Prometheus client does by default, when set to false. This reduces the number of metrics reported, reducing Prometheus load. | -| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | -| Default | `true` | +| Attribute | Value | +| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_GENERICXDPENABLED` | +| Description | Enables Generic XDP so network cards that don't support XDP offload or driver modes can use XDP. This is not recommended since it doesn't provide better performance than iptables. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | -### Data plane: Common[​](#data-plane-common) +#### `XDPEnabled` -No matching group found for 'Dataplane: Common'. + -### Data plane: iptables[​](#data-plane-iptables) +**Tab: Configuration file** -No matching group found for 'Dataplane: iptables'. +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `XDPEnabled` | +| Description | Enables XDP acceleration for suitable untracked incoming deny rules. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. 
| +| Default | `false` | -### Data plane: nftables[​](#data-plane-nftables) +**Tab: Environment variable** -No matching group found for 'Dataplane: nftables'. +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_XDPENABLED` | +| Description | Enables XDP acceleration for suitable untracked incoming deny rules. | +| Schema | Boolean: `true`, `1`, `yes`, `y`, `t` accepted as True; `false`, `0`, `no`, `n`, `f` accepted (case insensitively) as False. | +| Default | `false` | -### Data plane: eBPF[​](#data-plane-ebpf) + -No matching group found for 'Dataplane: eBPF'. +#### `XDPRefreshInterval` -### Data plane: Windows[​](#data-plane-windows) + -No matching group found for 'Dataplane: Windows'. +**Tab: Configuration file** -### Data plane: OpenStack support[​](#data-plane-openstack-support) +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `XDPRefreshInterval` | +| Description | The period at which Felix re-checks all XDP state to ensure that no other process has accidentally broken Calico's BPF maps or attached programs. Set to 0 to disable XDP refresh. | +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) | -No matching group found for 'Dataplane: OpenStack support'. 
+**Tab: Environment variable** -### Data plane: XDP acceleration for iptables data plane[​](#data-plane-xdp-acceleration-for-iptables-data-plane) +| Attribute | Value | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Key | `FELIX_XDPREFRESHINTERVAL` | +| Description | The period at which Felix re-checks all XDP state to ensure that no other process has accidentally broken Calico's BPF maps or attached programs. Set to 0 to disable XDP refresh. | +| Schema | Seconds (floating point) | +| Default | `90` (1m30s) | -No matching group found for 'Dataplane: XDP acceleration for iptables dataplane'. + ### Overlay: VXLAN overlay[​](#overlay-vxlan-overlay) @@ -73884,7 +75809,7 @@ No matching group found for 'Dataplane: XDP acceleration for iptables da | Attribute | Value | | ----------- | -------------------------------------------------------------------- | | Key | `FlowLogsLocalReporter` | -| Description | Configures local Unix socket for reporting flow data from each node. | +| Description | Configures local unix socket for reporting flow data from each node. | | Schema | One of: `Disabled`, `Enabled` (case insensitive) | | Default | `Disabled` | @@ -73893,7 +75818,7 @@ No matching group found for 'Dataplane: XDP acceleration for iptables da | Attribute | Value | | ----------- | -------------------------------------------------------------------- | | Key | `FELIX_FLOWLOGSLOCALREPORTER` | -| Description | Configures local Unix socket for reporting flow data from each node. | +| Description | Configures local unix socket for reporting flow data from each node. 
| | Schema | One of: `Disabled`, `Enabled` (case insensitive) | | Default | `Disabled` | @@ -77143,6 +79068,84 @@ Then you should observe that the new Calico Enterprise policy is enforced for ne This page lists the specific component versions that go into each release of Calico Enterprise. +## Component versions for Calico Enterprise 3.22.4[​](#component-versions-v3.22.4) + +[Release archive](https://downloads.tigera.io/ee/archives/release-v3.22.4-v1.40.10.tgz) with Kubernetes manifests. Based on Calico v3.31. + +This release comprises the following components, and can be installed using + + + +`quay.io/tigera/operator:v1.40.10` + +| Component | Version | +| ------------------------------ | ------- | +| alertmanager | v3.22.4 | +| calicoctl | v3.22.4 | +| calicoq | v3.22.4 | +| apiserver | v3.22.4 | +| kube-controllers | v3.22.4 | +| manager | v3.22.4 | +| node | v3.22.4 | +| node-windows | v3.22.4 | +| queryserver | v3.22.4 | +| compliance-benchmarker | v3.22.4 | +| compliance-controller | v3.22.4 | +| compliance-reporter | v3.22.4 | +| compliance-server | v3.22.4 | +| compliance-snapshotter | v3.22.4 | +| coreos-alertmanager | v0.32.0 | +| coreos-config-reloader | v0.90.1 | +| coreos-dex | v2.45.1 | +| coreos-fluentd | 1.19.2 | +| upstream-istio | 1.28.1 | +| coreos-prometheus | v3.11.1 | +| coreos-prometheus-operator | v0.90.1 | +| csi | v3.22.4 | +| csi-node-driver-registrar | v3.22.4 | +| deep-packet-inspection | v3.22.4 | +| dex | v3.22.4 | +| dikastes | v3.22.4 | +| eck-elasticsearch | 8.19.14 | +| eck-elasticsearch-operator | 2.16.1 | +| eck-kibana | 8.19.14 | +| egress-gateway | v3.22.4 | +| elastic-tsee-installer | v3.22.4 | +| elasticsearch | v3.22.4 | +| elasticsearch-metrics | v3.22.4 | +| elasticsearch-operator | v3.22.4 | +| envoy | v3.22.4 | +| es-gateway | v3.22.4 | +| firewall-integration | v3.22.4 | +| flexvol | v3.22.4 | +| fluentd | v3.22.4 | +| fluentd-windows | v3.22.4 | +| gateway-api-envoy-gateway | v3.22.4 | +| gateway-api-envoy-proxy | 
v3.22.4 |
+| gateway-api-envoy-ratelimit | v3.22.4 |
+| guardian | v3.22.4 |
+| ingress-collector | v3.22.4 |
+| intrusion-detection-controller | v3.22.4 |
+| key-cert-provisioner | v3.22.4 |
+| kibana | v3.22.4 |
+| l7-admission-controller | v3.22.4 |
+| l7-collector | v3.22.4 |
+| license-agent | v3.22.4 |
+| linseed | v3.22.4 |
+| packetcapture | v3.22.4 |
+| policy-recommendation | v3.22.4 |
+| prometheus | v3.22.4 |
+| prometheus-config-reloader | v3.22.4 |
+| prometheus-operator | v3.22.4 |
+| tigera-cni | v3.22.4 |
+| tigera-cni-windows | v3.22.4 |
+| tigera-prometheus-service | v3.22.4 |
+| typha | v3.22.4 |
+| ui-apis | v3.22.4 |
+| voltron | v3.22.4 |
+| waf-http-filter | v3.22.4 |
+| webhooks-processor | v3.22.4 |
+
## Component versions for Calico Enterprise 3.22.3[​](#component-versions-v3.22.3)

[Release archive](https://downloads.tigera.io/ee/archives/release-v3.22.3-v1.40.9.tgz) with Kubernetes manifests. Based on Calico v3.31.

@@ -78230,6 +80233,10 @@ To update an existing installation of Calico Enterprise 3.22, see [Install a pat

April 24, 2026

+> **WARNING:** Due to an issue introduced while updating Kubernetes dependencies, this patch release is *not recommended for use* with GitOps tools such as ArgoCD. With ArgoCD, the cluster becomes disconnected, disabling all management. The impact on other GitOps tools is not known at this time, but it is potentially similar.
+>
+> Use v3.22.4 instead, which includes a fix for this issue.
+
#### Enhancements[​](#enhancements-3)

- Display the `Degraded` condition's message when running `kubectl get tigerastatus`, making it easier to see error details at a glance without needing to describe the resource.

@@ -78302,3 +80309,14 @@ April 24, 2026

- Security updates.

To update an existing installation of Calico Enterprise 3.22, see [Install a patch release](https://docs.tigera.io/calico-enterprise/latest/getting-started/manifest-archive). 
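The component tables above identify each Calico Enterprise patch by a single directly referenced image: the Tigera operator tag (`quay.io/tigera/operator:v1.40.10` for 3.22.4); the operator then pulls the matching `v3.22.x` component images. As a minimal sketch of where that tag lands — the Deployment and namespace names `tigera-operator` are assumed from the standard Tigera manifests, not taken from this page:

```yaml
# Hypothetical excerpt of the Tigera operator Deployment for the 3.22.4 patch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tigera-operator        # assumed standard manifest name
  namespace: tigera-operator   # assumed standard namespace
spec:
  template:
    spec:
      containers:
        - name: tigera-operator
          # Operator tag from the 3.22.4 component-version table above;
          # the operator deploys the corresponding v3.22.4 component images.
          image: quay.io/tigera/operator:v1.40.10
```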
+
+### Calico Enterprise 3.22.4 bug fix release[​](#calico-enterprise-3224-bug-fix-release)
+
+May 6, 2026
+
+#### Bug fixes[​](#bug-fixes-5)
+
+- Fixed an issue that caused the `calico-apiserver` to log `[SHOULD NOT HAPPEN] failed to update managedFields ... no corresponding type for projectcalico.org/v3, Kind=...` on every Calico v3 resource update.
+- Restored correct OpenAPI definition names for ArgoCD schema validation.
+
+To update an existing installation of Calico Enterprise 3.22, see [Install a patch release](https://docs.tigera.io/calico-enterprise/latest/getting-started/manifest-archive).
diff --git a/static/calico-enterprise/llms.txt b/static/calico-enterprise/llms.txt
index e9904e6985..70e69a34ed 100644
--- a/static/calico-enterprise/llms.txt
+++ b/static/calico-enterprise/llms.txt
@@ -9,60 +9,60 @@

## Install and upgrade

-- [Install Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/): Install Calico Enterprise on nodes and hosts for popular orchestrators, and install the calicoctl command line interface (CLI) tool.
-- [Support and compatibility](https://docs.tigera.io/calico-enterprise/latest/getting-started/compatibility): Lists versions of Calico Enterprise and Kubernetes for each platform.
-- [Install on clusters](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/): Install Calico Enterprise on clusters.
-- [Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/): Get Calico up and running in your Kubernetes cluster.
-- [Quickstart for Calico Enterprise on Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart): Install Calico Enterprise on a single-host Kubernetes cluster for testing or development. 
-- [Options for installing Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/options-install): Learn about API-driven installation and how to customize your installation configuration. -- [Standard](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/generic-install): Install Calico Enterprise on a kubeadm-provisioned Kubernetes cluster for on-premises deployments. -- [Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm): Install Calico Enterprise using Helm application package manager. -- [OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/): Install Calico on OpenShift for networking and network policy. -- [System requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/requirements): Review requirements for using OpenShift with Calico Enterprise. -- [Install Calico Enterprise on OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/installation): Install Calico Enterprise on an OpenShift 4 cluster. -- [Install Calico Enterprise on an OpenShift HCP cluster](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/hostedcontrolplanes): Install Calico Enterprise on an OpenShift Hosted Control Planes (HCP) cluster. -- [Install Calico Enterprise on a Red Hat OpenShift on AWS (ROSA) cluster](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/rosa): Install Calico Enterprise on a Red Hat OpenShift on AWS (ROSA) cluster. -- [Microsoft Azure Kubernetes Service (AKS)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aks): Install Calico Enterprise for an AKS cluster. 
-- [Amazon Elastic Kubernetes Service (EKS)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/eks): Enable Calico network policy in EKS. -- [Google Kubernetes Engine (GKE)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/gke): Enable Calico network policy in GKE. -- [kOps on AWS](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aws): Install Calico Enterprise with a self-managed Kubernetes cluster using kOps on AWS. -- [Mirantis Kubernetes Engine (MKE 3)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/docker-enterprise): Install Calico Enterprise on an MKE 3 cluster. +- [Install Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/): Install Calico Enterprise on Kubernetes, OpenShift, or bare-metal hosts. Includes guidance on installing the calicoctl command-line tool. +- [Support and compatibility](https://docs.tigera.io/calico-enterprise/latest/getting-started/compatibility): Supported combinations of Calico Enterprise, Kubernetes, OpenShift, and host platforms for each Calico Enterprise release. +- [Install on clusters](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/): Pick an installation path for Calico Enterprise on a Kubernetes or OpenShift cluster — covers managed cloud, self-managed, and air-gapped scenarios. +- [Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/): Pick a Kubernetes installation path for Calico Enterprise — covers Helm, kubeadm, and the API-driven Installation resource. +- [Quickstart for Calico Enterprise on Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart): Stand up Calico Enterprise on a single-host Kubernetes cluster in about an hour for testing, demos, or development — not intended for production. 
+- [Options for installing Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/options-install): Customize a Calico Enterprise installation by editing the Installation resource — IP pools, MTU, registries, BGP, and operator behavior. +- [Standard](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/generic-install): Install Calico Enterprise on a kubeadm-provisioned Kubernetes cluster running on-premises hardware or VMs. +- [Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/helm): Install Calico Enterprise on a Kubernetes cluster using the Helm 3 package manager. +- [OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/): Install Calico Enterprise on OpenShift 4 — covers requirements, the operator-based install path, and ROSA and Hosted Control Planes variants. +- [System requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/requirements): Cluster, OpenShift, and host OS requirements you must meet before installing Calico Enterprise on an OpenShift 4 cluster. +- [Install Calico Enterprise on OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/installation): Install Calico Enterprise on a self-managed OpenShift 4 cluster using the Tigera Operator. +- [Install Calico Enterprise on an OpenShift HCP cluster](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/hostedcontrolplanes): Install Calico Enterprise on an OpenShift Hosted Control Planes (HCP) cluster, where the control plane is managed and the data plane runs on user-owned nodes. 
+- [Install Calico Enterprise on a Red Hat OpenShift on AWS (ROSA) cluster](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/openshift/rosa): Install Calico Enterprise on a Red Hat OpenShift Service on AWS (ROSA) cluster.
+- [Microsoft Azure Kubernetes Service (AKS)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aks): Install Calico Enterprise on an Azure Kubernetes Service (AKS) cluster, including the steps that differ from a self-managed install.
+- [Amazon Elastic Kubernetes Service (EKS)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/eks): Install the full Calico Enterprise stack — including observability, threat defense, and tiered policy — on an Amazon EKS cluster.
+- [Google Kubernetes Engine (GKE)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/gke): Install the full Calico Enterprise stack — including observability, threat defense, and tiered policy — on a Google Kubernetes Engine (GKE) cluster.
+- [kOps on AWS](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/aws): Install Calico Enterprise on a self-managed Kubernetes cluster provisioned with kOps on Amazon Web Services.
+- [Mirantis Kubernetes Engine (MKE 3)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/docker-enterprise): Install Calico Enterprise on a Mirantis Kubernetes Engine (MKE) 3 cluster.
- [Install Calico Enterprise on Mirantis Kubernetes Engine 4k](https://docs.tigera.io/calico-enterprise/latest/installation/install-calico-enterprise-mke-4k): Draft installation guide for Mirantis Kubernetes Engine 4k
-- [Rancher Kubernetes Engine (RKE)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher): Install Calico Enterprise on RKE.
-- [RKE2](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rke2): Install Calico Enterprise on an RKE2 cluster.
-- [Rancher UI](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher-ui): Install Calico Enterprise on a RKE2 cluster using the Rancher UI.
-- [Tanzu Kubernetes Grid (TKG)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/tkg): Install Calico Enterprise on Tanzu Kubernetes Grid.
-- [Install Calico Enterprise on a Charmed Kubernetes cluster](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/charmed-k8s): Install Calico Enterprise on a Charmed Kubernetes cluster.
-- [Calico Enterprise for Windows](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/): Install and configure Calico Enterprise for Windows.
-- [Limitations and known issues](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/limitations): Review limitations before starting installation.
-- [Requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/requirements): Review requirements for Calico Enterprise for Windows.
-- [Install using Operator](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/operator): Install Calico Enterprise for Windows on a Kubernetes cluster for testing or development.
-- [Install Calico Enterprise for Windows on RKE](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/rancher): Install Calico Enterprise for Windows on RKE.
-- [Basic policy demo](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/demo): An interactive demo to show how to apply basic network policy to pods in a Calico Enterprise for Windows cluster.
-- [Configure flow logs for workloads](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/flowlogs): Configure flow logs for Calico Enterprise for Windows workloads.
-- [Configure DNS policy for workloads](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/dnspolicy): Configure DNS policy for Calico Enterprise for Windows workloads.
-- [Troubleshoot Calico Enterprise for Windows](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/troubleshoot): Help for troubleshooting Calico Enterprise for Windows issues.
-- [Install from a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/): Install Calico Enterprise using a private registry.
-- [Install from a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular): Install and configure Calico Enterprise in a private registry.
-- [Install from an image path in a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-image-path): Install and configure Calico Enterprise using an image path in a private registry.
-- [Get a license](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/calico-enterprise): Get a license to install Calico Enterprise.
-- [System requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/requirements): Review requirements to install Calico Enterprise networking and network policy.
-- [Non-cluster hosts](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/): Install Calico Enterprise on hosts to secure host communications.
-- [Install Calico on non-cluster hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/about): Install Calico on non-cluster hosts and VMs
-- [Use custom certificates for Node and Typha](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/typha-node-tls): Use custom TLS certificates for non-cluster Calico Node and Typha
-- [Troubleshoot non-cluster hosts and VMs setup](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/troubleshoot): Troubleshoot non-cluster hosts and VMs setup
-- [Upgrade](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/): Upgrade to a newer version of Calico.
-- [Upgrade Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/): Upgrade to a newer version of Calico Enterprise.
-- [Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/): Upgrade from an earlier release of Calico Enterprise using Kubernetes.
-- [Upgrade Calico Enterprise installed with Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm): Upgrade to a newer version of Calico Enterprise installed with Helm.
-- [Upgrade Calico Enterprise installed with the operator](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator): Upgrading from an earlier release of Calico Enterprise with the operator.
-- [Upgrade Calico Enterprise installed with OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/openshift-upgrade): Upgrade to a newer version of Calico Enterprise installed with OpenShift.
-- [Upgrade from Calico to Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/): Upgrade from Calico to Calico Enterprise.
-- [Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/): Upgrade to Calico Enterprise from Calico installed with Helm.
-- [Upgrade from Calico to Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/standard): Steps to upgrade from open source Calico to Calico Enterprise.
-- [Upgrade Calico to Calico Enterprise installed with Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/helm): Upgrade to Calico Enterprise from Calico installed with Helm.
-- [Upgrade from Calico to Calico Enterprise on OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee-openshift): Steps to upgrade from open source Calico to Calico Enterprise on OpenShift.
-- [Install a patch release](https://docs.tigera.io/calico-enterprise/latest/getting-started/manifest-archive): Install an older patch release of Calico Enterprise.
+- [Rancher Kubernetes Engine (RKE)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher): Install Calico Enterprise on a Rancher Kubernetes Engine (RKE) cluster.
+- [RKE2](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rke2): Install Calico Enterprise on an RKE2 cluster using the standard command-line installer.
+- [Rancher UI](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/rancher-ui): Install Calico Enterprise on an RKE2 cluster from the Rancher UI rather than the command line.
+- [Tanzu Kubernetes Grid (TKG)](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/tkg): Install Calico Enterprise on a VMware Tanzu Kubernetes Grid (TKG) cluster.
+- [Install Calico Enterprise on a Charmed Kubernetes cluster](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/charmed-k8s): Install Calico Enterprise on a Canonical Charmed Kubernetes cluster.
+- [Calico Enterprise for Windows](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/): Install and configure Calico Enterprise for Windows — covers requirements, supported platforms, and Windows-node install paths.
+- [Limitations and known issues](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/limitations): Known limitations of Calico Enterprise for Windows that you should review before planning an installation.
+- [Requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/requirements): Cluster and Windows host requirements you must meet before installing Calico Enterprise for Windows.
+- [Install using Operator](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/operator): Install Calico Enterprise for Windows on a Kubernetes cluster using the operator, for testing or development.
+- [Install Calico Enterprise for Windows on RKE](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/rancher): Install Calico Enterprise for Windows on a Rancher Kubernetes Engine (RKE) cluster with Windows worker nodes.
+- [Basic policy demo](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/demo): Interactive demo that applies basic Calico Enterprise network policy to pods running on a Windows node.
+- [Configure flow logs for workloads](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/flowlogs): Configure flow logs for Calico Enterprise for Windows workloads so traffic activity is captured for observability and forensics.
+- [Configure DNS policy for workloads](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/dnspolicy): Configure DNS policy for Calico Enterprise for Windows workloads to control egress to external services by hostname.
+- [Troubleshoot Calico Enterprise for Windows](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/windows-calico/troubleshoot): Troubleshooting guide for Calico Enterprise for Windows clusters — common issues, diagnostic steps, and where to look for logs.
+- [Install from a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/): Install Calico Enterprise from a private container registry — for air-gapped clusters or environments that pull all images internally.
+- [Install from a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-regular): Install Calico Enterprise from a private container registry using the standard image paths.
+- [Install from an image path in a private registry](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/private-registry/private-registry-image-path): Install Calico Enterprise from a private registry that uses a non-default image path or repository structure.
+- [Get a license](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/calico-enterprise): How to obtain a Calico Enterprise license file before starting an installation.
+- [System requirements](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/requirements): Cluster, host, and platform requirements you must meet before installing Calico Enterprise networking and network policy.
+- [Non-cluster hosts](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/): Install Calico Enterprise on bare-metal hosts and VMs to extend zero-trust policy enforcement beyond a Kubernetes cluster.
+- [Install Calico on non-cluster hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/about): Install Calico Enterprise on non-cluster hosts and VMs to apply Calico network policy and capture flow logs for workloads running outside Kubernetes.
+- [Use custom certificates for Node and Typha](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/typha-node-tls): Configure custom TLS certificates between non-cluster Calico Enterprise nodes and Typha for clusters with strict PKI requirements.
+- [Troubleshoot non-cluster hosts and VMs setup](https://docs.tigera.io/calico-enterprise/latest/getting-started/bare-metal/troubleshoot): Troubleshooting guide for Calico Enterprise on non-cluster hosts and VMs — connectivity, agent registration, and policy issues.
+- [Upgrade](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/): Pick an upgrade path for Calico Enterprise — covers migrations from Calico Open Source and version-to-version Calico Enterprise upgrades.
+- [Upgrade Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/): Upgrade an existing Calico Enterprise cluster to a newer version — pick a path based on the install method and platform.
+- [Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/): Upgrade an existing Calico Enterprise installation on a Kubernetes cluster to a newer version.
+- [Upgrade Calico Enterprise installed with Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm): Upgrade a Helm-installed Calico Enterprise cluster on Kubernetes to a newer version.
+- [Upgrade Calico Enterprise installed with the operator](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator): Upgrade an operator-installed Calico Enterprise cluster on Kubernetes to a newer version.
+- [Upgrade Calico Enterprise installed with OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-enterprise/openshift-upgrade): Upgrade an existing Calico Enterprise installation on an OpenShift 4 cluster to a newer version.
+- [Upgrade from Calico to Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/): Upgrade from Calico Open Source to Calico Enterprise — covers the supported install methods and what changes during the migration.
+- [Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/): Upgrade from Calico Open Source to Calico Enterprise on a Kubernetes cluster — pick a path based on the original install method.
+- [Upgrade from Calico to Calico Enterprise](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/standard): Upgrade from an operator-installed Calico Open Source cluster to Calico Enterprise on Kubernetes.
+- [Upgrade Calico to Calico Enterprise installed with Helm](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee/helm): Upgrade from a Helm-installed Calico Open Source cluster to Calico Enterprise on Kubernetes.
+- [Upgrade from Calico to Calico Enterprise on OpenShift](https://docs.tigera.io/calico-enterprise/latest/getting-started/upgrading/upgrading-calico-to-calico-enterprise/upgrade-to-tsee-openshift): Upgrade from Calico Open Source to Calico Enterprise on an OpenShift 4 cluster.
+- [Install a patch release](https://docs.tigera.io/calico-enterprise/latest/getting-started/manifest-archive): Install an older patch release of Calico Enterprise from the manifest archive when an upgrade to the latest is not yet possible.

## Networking

@@ -110,61 +110,61 @@

## Network policy

-- [Network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/): Calico Enterprise Network Policy and Calico Enterprise Global Network Policy are the fundamental resources to secure workloads and hosts, and to adopt a zero trust security model.
-- [Policy recommendations](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/): Enable policy recommendations for namespaces to improve your security posture.
-- [Enable policy recommendations](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations): Enable continuous policy recommendations to secure unprotected namespaces or workloads.
-- [Policy recommendations tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/learn-about-policy-recommendations): Policy recommendations tutorial.
-- [Policy best practices](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-best-practices): Learn policy best practices for security, scalability, and performance.
-- [Policy tiers](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/): Learn how policy tiers allow diverse teams to securely manage Kubernetes policy.
-- [Get started with policy tiers](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy): Understand how tiered policy works and supports microsegmentation.
-- [Change allow-tigera tier behavior](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/allow-tigera): Understand how to change the behavior of the allow-tigera tier.
-- [Network policy tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/policy-tutorial-ui): Covers the basics of Calico Enterprise network policy.
-- [Configure RBAC for tiered policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/rbac-tiered-policies): Configure RBAC to control access to policies and tiers.
-- [Get started with network sets](https://docs.tigera.io/calico-enterprise/latest/network-policy/networksets): Learn the power of network sets and why you should create them.
-- [Global default deny policy best practices](https://docs.tigera.io/calico-enterprise/latest/network-policy/default-deny): Implement a global default deny policy in the default tier to block unwanted traffic.
-- [Stage, preview impacts, and enforce policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/staged-network-policies): Stage and preview policies to observe traffic implications before enforcing them.
-- [Troubleshoot policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-troubleshooting): Common policy implementation problems.
-- [Calico Enterprise network policy for beginners](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/): Learn how to create your first Calico Enterprise network policy.
-- [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/kubernetes-default-deny): Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined.
-- [Get started with Calico network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-network-policy): Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy.
-- [Calico Enterprise automatic labels](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-labels): Calico Enterprise automatic labels for use with resources.
-- [Calico Enterprise for Kubernetes demo](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/simple-policy-cnx): Learn the extra features for Calico Enterprise that make it so important for production environments.
-- [Policy rules](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/): Control traffic to/from endpoints using Calico network policy rules.
-- [Basic rules](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/policy-rules-overview): Define network connectivity for Calico endpoints using policy rules and label selectors.
-- [Use namespace rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/namespace-policy): Use namespaces and namespace selectors in Calico network policy to group or separate resources. Use network policies to allow or deny traffic to/from pods that belong to specific namespaces.
-- [Use service accounts rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-accounts): Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams.
-- [Use service rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-policy): Use Kubernetes Service names in policy rules.
-- [Use external IPs or networks rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/external-ips-policy): Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets.
-- [Use ICMP/ping rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/icmp-ping): Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints.
-- [Policy for Kubernetes services](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/): Apply Calico policy to Kubernetes node ports, and to services that are exposed externally as cluster IPs.
-- [Apply Calico Enterprise policy to Kubernetes node ports](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/kubernetes-node-ports): Restrict access to Kubernetes node ports using Calico Enterprise global network policy. Follow the steps to secure the host, the node ports, and the cluster.
-- [Apply Calico Enterprise policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/services-cluster-ips): Expose Kubernetes service cluster IPs over BGP using Calico Enterprise, and restrict who can access them using Calico Enterprise network policy.
-- [DNS policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/domain-based-policy): Use domain names to allow traffic to destinations outside of a cluster by their DNS names instead of by their IP addresses.
-- [Application layer policies to control ingress traffic](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/): Use application layer policies to restrict ingress traffic based on HTTP attributes.
-- [Enable and enforce application layer policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp): Enforce application layer policies in your cluster to configure access controls based on L7 attributes.
-- [Application layer policy tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp-tutorial): Learn how to apply ALP to your workloads and control ingress traffic.
-- [Policy for firewalls](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/): Use Calico Enterprise policy with existing firewalls.
-- [Fortinet firewall integrations](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/): Calico Enterprise Fortinet firewall integrations.
-- [Determine the best Calico Enterprise/Fortinet solution](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/overview): Learn how to integrate Kubernetes clusters with existing Fortinet firewall workflows using Calico Enterprise.
-- [Extend Kubernetes to Fortinet firewall devices](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/firewall-integration): Enable FortiGate firewalls to control traffic from Kubernetes workloads.
-- [Extend FortiManager firewall policies to Kubernetes](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration): Extend FortiManager firewall policies to Kubernetes with Calico Enterprise
-- [Policy for hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/): Use the same Calico network policy for workloads to restrict traffic between hosts and the outside world.
-- [Protect hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts): Create Calico Enterprise network policies to restrict traffic to/from hosts.
-- [Protect Kubernetes nodes](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/kubernetes-nodes): Protect Kubernetes nodes with host endpoints managed by Calico Enterprise.
-- [Protect hosts tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts-tutorial): Learn how to secure incoming traffic from outside the cluster using Calico host endpoints with network policy, including allowing controlled access to specific Kubernetes services.
-- [Apply policy to forwarded traffic](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/host-forwarded-traffic): Apply Calico Enterprise network policy to traffic being forward by hosts acting as routers or NAT gateways.
-- [Policy for extreme traffic](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/): Use Calico network policy early in the Linux packet processing pipeline to handle extreme traffic scenarios.
-- [Enable extreme high-connection workloads](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads): Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience extremely large number of connections.
-- [Defend against DoS attacks](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/defend-dos-attack): Define DoS mitigation rules in Calico Enterprise policy to quickly drop connections when under attack. Learn how rules use eBPF and XDP, including hardware offload when available.
-- [Get started with policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/): If you are new to Kubernetes, start with "Kubernetes policy" and learn the basics of enforcing policy for pod traffic. Otherwise, dive in and create more powerful policies with Calico policy. The good news is, Kubernetes and Calico policies are very similar and work alongside each other -- so managing both types is easy.
-- [What is network policy?](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-network-policy): Learn the basics of Kubernetes and Calico Enterprise network policy
-- [Get started with Kubernetes network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-network-policy): Learn Kubernetes policy syntax, rules, and features for controlling network traffic.
-- [Kubernetes policy, demo](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-demo): An interactive demo that visually shows how applying Kubernetes policy allows and denies connections.
-- [Kubernetes policy, basic tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-basic): Learn how to use basic Kubernetes network policy to securely restrict traffic to/from pods.
-- [Kubernetes policy, advanced tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-advanced): Learn how to create more advanced Kubernetes network policies (namespace, allow and deny all ingress and egress).
-- [Kubernetes services](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-services): Learn the three main service types and how to use them.
-- [Kubernetes ingress](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-ingress): Learn the different ingress implementations and how ingress and policy interact.
-- [Kubernetes egress](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-egress): Learn why you should restrict egress traffic and how to do it.
+- [Network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/): Secure Kubernetes workloads and hosts with Calico Enterprise network policy — extends Kubernetes NetworkPolicy with tiers, recommendations, and observability.
+- [Policy recommendations](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/): Use Calico Enterprise policy recommendations to generate baseline network policy for unprotected namespaces from observed flow logs.
+- [Enable policy recommendations](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/policy-recommendations): Run continuous Calico Enterprise policy recommendations so unprotected namespaces and workloads pick up baseline policy automatically.
+- [Policy recommendations tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/recommendations/learn-about-policy-recommendations): Tutorial walkthrough of the Calico Enterprise policy recommendations engine — what it generates, how to review it, and how to promote it to enforced.
+- [Policy best practices](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-best-practices): Best practices for Calico Enterprise policy — security posture, scalability with tiers, and performance tuning under load.
+- [Policy tiers](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/): Use Calico Enterprise policy tiers to let platform, security, and app teams author and order policy independently within shared clusters.
+- [Get started with policy tiers](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy): How tiered policy works in Calico Enterprise — evaluation order, pass actions, and using tiers to enforce microsegmentation across teams.
+- [Change allow-tigera tier behavior](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/allow-tigera): Customize the behavior of the allow-tigera tier that Calico Enterprise installs by default to keep its own components reachable.
+- [Network policy tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/policy-tutorial-ui): Tutorial for the Calico Enterprise policy management UI — author, order, and stage policies inside tiers from the web console.
+- [Configure RBAC for tiered policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/rbac-tiered-policies): Configure Kubernetes RBAC to control which users can edit Calico Enterprise policies in each tier.
+- [Get started with network sets](https://docs.tigera.io/calico-enterprise/latest/network-policy/networksets): Use Calico Enterprise network sets to package frequently reused IP ranges or domains into named selectors that policies can reference.
+- [Global default deny policy best practices](https://docs.tigera.io/calico-enterprise/latest/network-policy/default-deny): Deploy a global default-deny policy in the Calico Enterprise default tier so unprotected workloads are blocked until policy is written.
+- [Stage, preview impacts, and enforce policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/staged-network-policies): Stage and preview Calico Enterprise network policies in the management UI to observe traffic impact before enforcing. +- [Troubleshoot policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-troubleshooting): Troubleshooting guide for common Calico Enterprise policy implementation problems — denied traffic, missing rules, and tier-evaluation surprises. +- [Calico Enterprise network policy for beginners](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/): Beginner-friendly path for writing your first Calico Enterprise network policies — a tour of the basic resource types and rule patterns. +- [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/kubernetes-default-deny): Apply a default-deny network policy in a Calico Enterprise cluster so unprotected pods are denied traffic until explicit policy is written. +- [Get started with Calico network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-network-policy): Write your first Calico Enterprise NetworkPolicy — sample policies that exercise the rich rule features beyond Kubernetes NetworkPolicy. +- [Calico Enterprise automatic labels](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/calico-labels): Reference list of automatic labels Calico Enterprise attaches to resources, useful as selectors in policy rules. +- [Calico Enterprise for Kubernetes demo](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/simple-policy-cnx): Tour of the additional features Calico Enterprise adds to Kubernetes policy that make it suitable for production environments. 
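As a taste of what the beginner entries above cover, a namespaced Calico network policy looks roughly like this. The labels, namespace, and port are invented for illustration and do not come from the linked pages:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: default.backend-ingress  # "default." prefix places it in the default tier
  namespace: demo
spec:
  tier: default
  selector: app == 'backend'     # endpoints this policy applies to
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: app == 'frontend'  # only frontend pods may connect
    destination:
      ports:
      - 6379
```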
+- [Policy rules](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/): Control traffic to and from endpoints using Calico Enterprise network policy rules — selectors, actions, and egress/ingress directions. +- [Basic rules](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/policy-rules-overview): How to write policy rules in Calico Enterprise — label selectors, source and destination match criteria, and rule actions. +- [Use namespace rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/namespace-policy): Group or separate workloads in Calico Enterprise policy using namespaces and namespace selectors so policies apply only to specified namespaces. +- [Use service accounts rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-accounts): Match on Kubernetes service accounts in Calico Enterprise policy rules to validate workload identity and apply RBAC-controlled rules. +- [Use service rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/service-policy): Match on Kubernetes Service names in Calico Enterprise policy rules instead of specific pod selectors. +- [Use external IPs or networks rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/external-ips-policy): Restrict egress and ingress to specific IP ranges in Calico Enterprise policy, either inline or via reusable network sets. +- [Use ICMP/ping rules in policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/policy-rules/icmp-ping): Allow or deny ICMP and ping traffic for Calico Enterprise workloads and host endpoints using policy rules. 
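The namespace and service-account rule entries above combine naturally in a single rule. A minimal sketch, with hypothetical labels, namespace, and service-account names (not drawn from the linked pages):

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: default.allow-api-clients
  namespace: prod
spec:
  tier: default
  selector: app == 'api'
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      # Restrict by namespace label and by the caller's service account.
      namespaceSelector: env == 'prod'
      serviceAccounts:
        names: ["billing-client"]
    destination:
      ports:
      - 443
```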
+- [Policy for Kubernetes services](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/): Apply Calico Enterprise policy to Kubernetes Services — node ports, ClusterIPs, and externally exposed services. +- [Apply Calico Enterprise policy to Kubernetes node ports](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/kubernetes-node-ports): Restrict access to Kubernetes NodePort services using a Calico Enterprise GlobalNetworkPolicy at the host endpoint. +- [Apply Calico Enterprise policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico-enterprise/latest/network-policy/beginners/services/services-cluster-ips): Expose Kubernetes Service ClusterIPs over BGP using Calico Enterprise and restrict who can reach them with network policy. +- [DNS policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/domain-based-policy): Allow traffic to external destinations by DNS name using Calico Enterprise domain-based policy rules — without maintaining static IP lists. +- [Application layer policies to control ingress traffic](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/): Restrict ingress traffic to Calico Enterprise workloads by HTTP method, path, or other Layer-7 attributes using application-layer policy. +- [Enable and enforce application layer policies](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp): Configure access controls based on Layer-7 attributes by enforcing Calico Enterprise application-layer policy in the cluster. +- [Application layer policy tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/application-layer-policies/alp-tutorial): Step-by-step tutorial for applying Calico Enterprise application-layer policy to workloads — control ingress traffic by HTTP attributes. 
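The DNS policy entry above allows egress by domain name rather than by IP list. A hedged sketch of the shape such a rule takes; the domains, labels, and port are illustrative only:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: default.allow-external-api
  namespace: demo
spec:
  tier: default
  selector: app == 'worker'
  types:
  - Egress
  egress:
  - action: Allow
    protocol: TCP
    destination:
      # DNS names instead of maintained IP lists; wildcards cover subdomains.
      domains: ["api.example.com", "*.storage.example.com"]
      ports:
      - 443
```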
+- [Policy for firewalls](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/): Integrate Calico Enterprise policy with existing perimeter firewalls — extend rule scope from Kubernetes workloads out to the network edge. +- [Fortinet firewall integrations](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/): Calico Enterprise integrations with Fortinet firewalls — FortiGate for traffic enforcement and FortiManager for policy management. +- [Determine the best Calico Enterprise/Fortinet solution](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/overview): Integrate Kubernetes clusters with existing Fortinet firewall workflows using Calico Enterprise — architecture, components, and what each side enforces. +- [Extend Kubernetes to Fortinet firewall devices](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/firewall-integration): Use a FortiGate firewall to control egress traffic from Kubernetes workloads in a Calico Enterprise cluster. +- [Extend FortiManager firewall policies to Kubernetes](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration): Extend FortiManager firewall policies into Kubernetes workloads in a Calico Enterprise cluster. +- [Policy for hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/): Apply Calico Enterprise network policy to host interfaces so the same selector-based model protects bare-metal hosts and VMs alongside pods. +- [Protect hosts and VMs](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts): Protect Kubernetes hosts and bare-metal nodes with Calico Enterprise policy by writing rules that target host endpoints. 
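Protecting hosts and VMs, as described in the entries above, starts by registering a host interface as a HostEndpoint so selector-based policy can target it. A minimal sketch with invented node, interface, and label values:

```yaml
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0
  labels:
    role: k8s-worker   # policies select hosts through labels like this one
spec:
  node: node1          # must match the Calico node name for the host
  interfaceName: eth0
  expectedIPs:
  - 192.0.2.10
```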
+- [Protect Kubernetes nodes](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/kubernetes-nodes): Protect Kubernetes node interfaces with Calico Enterprise host endpoints to extend network policy to the node itself. +- [Protect hosts tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/protect-hosts-tutorial): Tutorial for protecting hosts in a Calico Enterprise cluster — register host endpoints, write rules, and allow controlled access to specific Kubernetes services. +- [Apply policy to forwarded traffic](https://docs.tigera.io/calico-enterprise/latest/network-policy/hosts/host-forwarded-traffic): Apply Calico Enterprise network policy to traffic forwarded through hosts acting as routers or NAT gateways. +- [Policy for extreme traffic](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/): Apply Calico Enterprise network policy early in the Linux packet-processing pipeline to handle DoS, high-connection, and other extreme traffic scenarios. +- [Enable extreme high-connection workloads](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/high-connection-workloads): Bypass Linux conntrack with a Calico Enterprise policy rule for workloads that handle an extreme number of concurrent connections. +- [Defend against DoS attacks](https://docs.tigera.io/calico-enterprise/latest/network-policy/extreme-traffic/defend-dos-attack): Define DoS mitigation rules in Calico Enterprise policy that drop connections at the eBPF or XDP layer, with hardware offload when available. +- [Get started with policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/): Pick a learning path for Calico Enterprise policy — start with Kubernetes-native NetworkPolicy basics or jump to the richer enterprise resources that build on top. 
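The extreme-traffic entries above describe bypassing conntrack for high-connection workloads. One hedged sketch of what an untracked policy can look like, with illustrative labels and port; check the linked page for the supported pattern before relying on this:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default.memcached-untracked
spec:
  tier: default
  selector: role == 'memcached-host'  # applies to host endpoints with this label
  applyOnForward: true
  doNotTrack: true   # rules run before connection tracking; conntrack is bypassed
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports: [11211]
  egress:
  # Untracked rules see no connection state, so return traffic
  # must be allowed explicitly.
  - action: Allow
    protocol: TCP
    source:
      ports: [11211]
```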
+- [What is network policy?](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-network-policy): Concepts you need before writing Calico Enterprise policy — how Kubernetes NetworkPolicy, Calico policy, and tiers interact. +- [Get started with Kubernetes network policy](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-network-policy): Reference for Kubernetes NetworkPolicy syntax, rules, and features when used with the Calico Enterprise enforcement engine. +- [Kubernetes policy, demo](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-demo): Interactive demo for a Calico Enterprise cluster that visualizes how Kubernetes NetworkPolicy allows and denies connections between pods. +- [Kubernetes policy, basic tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-basic): Apply your first Kubernetes NetworkPolicy in a Calico Enterprise cluster to restrict ingress and egress traffic to and from pods. +- [Kubernetes policy, advanced tutorial](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/kubernetes-policy-advanced): Write more advanced Kubernetes NetworkPolicy resources in a Calico Enterprise cluster — namespace scoping, allow-all, and deny-all variants. +- [Kubernetes services](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-services): How the three Kubernetes Service types behave in a Calico Enterprise cluster and where each one shows up in policy. +- [Kubernetes ingress](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-ingress): How different Kubernetes ingress implementations interact with Calico Enterprise network policy at the cluster edge. 
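For contrast with the Calico resources, the Kubernetes-native NetworkPolicy covered by the tutorial entries above is narrower but portable. A small sketch with invented namespace and labels, in the spirit of those tutorials rather than copied from them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: policy-demo
spec:
  podSelector:
    matchLabels:
      app: nginx       # the protected pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: access  # only pods carrying this label may connect
```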
+- [Kubernetes egress](https://docs.tigera.io/calico-enterprise/latest/network-policy/get-started/about-kubernetes-egress): Why egress traffic from Kubernetes workloads matters and how to restrict it with Calico Enterprise policy. ## Observability diff --git a/static/calico/llms-full.txt b/static/calico/llms-full.txt index 7d43f20489..b1c06847db 100644 --- a/static/calico/llms-full.txt +++ b/static/calico/llms-full.txt @@ -1170,157 +1170,157 @@ Quickstart tutorials and guides for installing Calico on Kubernetes, OpenStack, ##### [System requirements](https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements) -[Review requirements before installing Calico to ensure success.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements) +[Cluster, kernel, and platform requirements you must meet before installing Calico Open Source on Kubernetes.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements) ##### [Calico quickstart guide](https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart) -[Quickstart for Calico.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart) +[Install Calico Open Source on a single-host Kubernetes cluster in roughly 15 minutes — the standard starter path for trying Calico networking and network policy on a development machine.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart) ##### [Community-tested Kubernetes versions](https://docs.tigera.io/calico/latest/getting-started/kubernetes/community-tested) -[Provides community inputs on what versions of Kubernetes and platforms work with Calico.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/community-tested) +[Community-reported compatibility data for Calico Open Source across Kubernetes versions, distributions, and host platforms.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/community-tested) ## Installing[​](#installing) ##### 
[Installing on on-premises deployments](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises) -[Install Calico networking and network policy for on-premises deployments.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises) +[Install Calico Open Source networking and network policy on a self-managed Kubernetes cluster running on on-premises hardware.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises) ##### [Install Calico for policy and flannel (aka Canal) for networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel) -[If you use flannel for networking, you can install Calico network policy to secure cluster communications.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel) +[Install Calico Open Source network policy on an existing Flannel-networked cluster without replacing the data plane.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel) ##### [Installing on RKE](https://docs.tigera.io/calico/latest/getting-started/kubernetes/rancher) -[Install Calico on a Rancher Kubernetes Engine cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/rancher) +[Install Calico Open Source on a Rancher Kubernetes Engine cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/rancher) ##### [System requirements](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/requirements) -[Review the requirements for using OpenShift with Calico.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/requirements) +[Cluster, OpenShift, and host OS requirements you must meet before installing Calico Open Source on an OpenShift 4 cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/requirements) ##### [Install an OpenShift 4 
cluster with Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/installation) -[Install Calico on an OpenShift 4 cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/installation) +[Install Calico Open Source on a self-managed OpenShift 4 cluster using the operator-based installation flow.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/installation) ##### [Quickstart for Calico on K3s](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart) -[Install Calico on a single-node K3s cluster for testing or development in under 5 minutes.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart) +[Quickstart that installs Calico Open Source on a single-node K3s cluster in roughly 5 minutes for testing or development.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart) ##### [K3s multi-node install](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install) -[Install Calico on a multi node K3s cluster for testing or development.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install) +[Install Calico Open Source on a multi-node K3s cluster for testing or development workloads.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install) ## Installing on cloud infrastructure[​](#installing-on-cloud-infrastructure) ##### [Installing on EKS](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks) -[Enable Calico network policy in EKS.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks) +[Add Calico Open Source network policy to an Amazon EKS cluster running the AWS VPC CNI, without replacing the cluster's networking data plane.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks) ##### [Installing on 
GKE](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/gke) -[Enable Calico network policy in GKE.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/gke) +[Add Calico Open Source network policy to a Google Kubernetes Engine (GKE) cluster on top of the built-in GKE networking.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/gke) ##### [Installing on IKS](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/iks) -[Use IKS with built-in support for Calico networking and network policy.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/iks) +[IBM Cloud Kubernetes Service (IKS) ships with Calico Open Source as the built-in networking and policy engine — what is included and how to use it.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/iks) ##### [Installing on AKS](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/aks) -[Enable Calico network policy in AKS.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/aks) +[Add Calico Open Source network policy to an Azure Kubernetes Service (AKS) cluster running the Azure CNI.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/aks) ##### [Installing on AWS](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/aws) -[Use Calico with a self-managed Kubernetes cluster in Amazon Web Services (AWS).](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/aws) +[Run Calico Open Source on a self-managed Kubernetes cluster in Amazon Web Services (AWS) — what to know about VPC sizing, MTU, and source/dest checks.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/aws) ##### [Installing on 
GCE](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/gce) -[Use Calico with a self-managed Kubernetes cluster in Google Compute Engine (GCE).](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/gce) +[Run Calico Open Source on a self-managed Kubernetes cluster in Google Compute Engine (GCE) — what to know about IP forwarding, MTU, and route limits.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/gce) ##### [Installing on Azure](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/azure) -[Use Calico with a self-managed Kubernetes cluster in Microsoft Azure.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/azure) +[Run Calico Open Source on a self-managed Kubernetes cluster in Microsoft Azure — what to know about VNet routing, UDR limits, and IPAM choices.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/azure) ##### [Installing on Digital Ocean](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/do) -[Use Calico with a self-managed Kubernetes cluster in DigitalOcean (DO).](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/do) +[Run Calico Open Source on a self-managed Kubernetes cluster in DigitalOcean — what to know about MTU, droplet networking, and floating IPs.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/do) ## Calico for Windows[​](#calico-for-windows) ##### [Limitations and known issues](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/limitations) -[Review limitations before starting installation.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/limitations) +[Known limitations of Calico Open Source for Windows that you should review 
before planning an installation.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/limitations) ##### [Requirements](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/requirements) -[Review the requirements for Calico for Windows.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/requirements) +[Cluster and Windows host requirements you must meet before installing Calico Open Source for Windows.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/requirements) ##### [Install using Operator](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/operator) -[Install Calico for Windows on a Kubernetes cluster for testing or development.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/operator) +[Install Calico Open Source for Windows on a Kubernetes cluster using the operator, for testing or development.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/operator) ##### [Calico for Windows on a Rancher Kubernetes Engine cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/rancher) -[Install Calico for Windows on a Rancher RKE cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/rancher) +[Install Calico Open Source for Windows on a Rancher RKE cluster with Windows worker nodes.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/rancher) ##### [Basic policy demo](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/demo) -[An interactive demo to show how to apply basic network policy to pods in a Calico for Windows cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/demo) +[Interactive demo that applies basic Calico Open Source network policy to pods running on a Windows 
node.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/demo) ##### [Troubleshoot Calico for Windows](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/troubleshoot) -[Help for troubleshooting Calico for Windows issues in Calico this release.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/troubleshoot) +[Troubleshooting guide for Calico Open Source for Windows clusters — common issues, diagnostic steps, and where to look for logs.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/troubleshoot) ## OpenStack[​](#openstack) ##### [Calico for OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/overview) -[Review the Calico components used in an OpenStack deployment.](https://docs.tigera.io/calico/latest/getting-started/openstack/overview) +[Components and topology used when running Calico Open Source as the networking and policy layer for an OpenStack deployment.](https://docs.tigera.io/calico/latest/getting-started/openstack/overview) ##### [System requirements](https://docs.tigera.io/calico/latest/getting-started/openstack/requirements) -[Requirements for installing Calico on OpenStack nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/requirements) +[Hypervisor, OS, and OpenStack requirements you must meet before installing Calico Open Source on OpenStack nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/requirements) ##### [Calico on OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/overview) -[Choose a method for installing Calico for OpenStack.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/overview) +[Pick an installation method for Calico Open Source on OpenStack — DevStack for evaluation, or a per-distribution path for 
production.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/overview) ##### [Ubuntu](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/ubuntu) -[Install Calico on OpenStack, Ubuntu nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/ubuntu) +[Install Calico Open Source on an OpenStack deployment running Ubuntu compute nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/ubuntu) ##### [Red Hat Enterprise Linux](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/redhat) -[Install Calico on OpenStack, Red Hat Enterprise Linux nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/redhat) +[Install Calico Open Source on an OpenStack deployment running Red Hat Enterprise Linux compute nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/redhat) ##### [DevStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/devstack) -[Quickstart to show connectivity between DevStack and Calico.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/devstack) +[Quickstart that wires Calico Open Source into a DevStack OpenStack environment to verify connectivity and policy.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/devstack) ##### [Verify your deployment](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/verification) -[Quick steps to test that your Calico-based OpenStack deployment is running correctly.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/verification) +[Verification steps that confirm a Calico Open Source OpenStack deployment is forwarding traffic and applying policy correctly.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/verification) ## Non-cluster hosts[​](#non-cluster-hosts) ##### [About non-cluster 
hosts](https://docs.tigera.io/calico/latest/getting-started/bare-metal/about) -[Install Calico on hosts not in a cluster with network policy, or networking and network policy.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/about) +[Install Calico Open Source on non-cluster hosts and VMs — pick between policy-only and networking-and-policy modes for protecting hosts outside Kubernetes.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/about) ##### [System requirements](https://docs.tigera.io/calico/latest/getting-started/bare-metal/requirements) -[Review node requirements for installing Calico.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/requirements) +[Operating system, kernel, and connectivity requirements for installing Calico Open Source on a non-cluster host.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/requirements) ##### [Docker container install](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/container) -[Install Calico on non-cluster hosts using a Docker container.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/container) +[Run the Calico Open Source agent on a non-cluster host inside a Docker container.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/container) ##### [Binary install with package manager](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary-mgr) -[Install Calico on non-cluster host using a package manager.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary-mgr) +[Install the Calico Open Source binary on a non-cluster host using a Linux package manager such as apt or yum.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary-mgr) ##### [Binary install without package manager](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary) -[Install Calico binary on 
non-cluster hosts without a package manager.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary) +[Install the Calico Open Source binary directly on a non-cluster host without using a package manager.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary) ### Calico quickstart guide @@ -4769,11 +4769,11 @@ Refer to the [Calico ConfigMap manifest](https://raw.githubusercontent.com/proje ## [📄️About non-cluster hosts](https://docs.tigera.io/calico/latest/getting-started/bare-metal/about) -[Install Calico on hosts not in a cluster with network policy, or networking and network policy.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/about) +[Install Calico Open Source on non-cluster hosts and VMs — pick between policy-only and networking-and-policy modes for protecting hosts outside Kubernetes.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/about) ## [📄️System requirements](https://docs.tigera.io/calico/latest/getting-started/bare-metal/requirements) -[Review node requirements for installing Calico.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/requirements) +[Operating system, kernel, and connectivity requirements for installing Calico Open Source on a non-cluster host.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/requirements) ## [🗃Installation](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/) @@ -4920,15 +4920,15 @@ Due to the large number of distributions and kernel version out there, it’s ha ## [📄️Docker container install](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/container) -[Install Calico on non-cluster hosts using a Docker container.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/container) +[Run the Calico Open Source agent on a non-cluster host inside a Docker 
container.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/container) ## [📄️Binary install with package manager](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary-mgr) -[Install Calico on non-cluster host using a package manager.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary-mgr) +[Install the Calico Open Source binary on a non-cluster host using a Linux package manager such as apt or yum.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary-mgr) ## [📄️Binary install without package manager](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary) -[Install Calico binary on non-cluster hosts without a package manager.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary) +[Install the Calico Open Source binary directly on a non-cluster host without using a package manager.](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary) ### Docker container install @@ -5450,19 +5450,19 @@ The Felix logs should transition from periodic notifications that Felix is in th ## [📄️System requirements](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/requirements) -[Review the requirements for using OpenShift with Calico.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/requirements) +[Cluster, OpenShift, and host OS requirements you must meet before installing Calico Open Source on an OpenShift 4 cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/requirements) ## [📄️Install an OpenShift 4 cluster with Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/installation) -[Install Calico on an OpenShift 4 cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/installation) +[Install Calico Open Source on a self-managed 
OpenShift 4 cluster using the operator-based installation flow.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/installation) ## [📄️Install Calico on an OpenShift HCP cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/hostedcontrolplanes) -[Install Calico on an OpenShift Hosted Control Planes (HCP) cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/hostedcontrolplanes) +[Install Calico Open Source on an OpenShift Hosted Control Planes (HCP) cluster, where the control plane is managed and the data plane runs on user-owned nodes.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/hostedcontrolplanes) ## [📄️Migrate from OVN-Kubernetes CNI to Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/ovn-to-calico) -[Migrate from OVN Kubernetes CNI to Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/ovn-to-calico) +[Migrate an OpenShift 4 cluster from the OVN-Kubernetes CNI to Calico Open Source as the cluster networking provider.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/ovn-to-calico) ### System requirements for OpenShift @@ -6357,11 +6357,11 @@ Congratulations! 
You now have an RKE cluster running Calico ## [📄️Install Calico for policy and flannel (aka Canal) for networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel) -[If you use flannel for networking, you can install Calico network policy to secure cluster communications.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel) +[Install Calico Open Source network policy on an existing Flannel-networked cluster without replacing the data plane.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel) ## [📄️Migrate a Kubernetes cluster from flannel/Canal to Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/migration-from-flannel) -[Preserve your existing VXLAN networking in Calico, but take full advantage of Calico IP address management (IPAM) and advanced network policy features.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/migration-from-flannel) +[Migrate from Flannel to Calico Open Source while preserving the existing VXLAN data plane, gaining Calico IPAM and advanced policy.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/migration-from-flannel) ### Install Calico for policy and flannel (aka Canal) for networking @@ -6650,27 +6650,27 @@ Learn about [Calico IP address management](https://docs.tigera.io/calico/latest/ ## [📄️Limitations and known issues](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/limitations) -[Review limitations before starting installation.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/limitations) +[Known limitations of Calico Open Source for Windows that you should review before planning an installation.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/limitations) ## 
[📄️Requirements](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/requirements) -[Review the requirements for Calico for Windows.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/requirements) +[Cluster and Windows host requirements you must meet before installing Calico Open Source for Windows.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/requirements) ## [📄️Install using Operator](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/operator) -[Install Calico for Windows on a Kubernetes cluster for testing or development.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/operator) +[Install Calico Open Source for Windows on a Kubernetes cluster using the operator, for testing or development.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/operator) ## [📄️Calico for Windows on a Rancher Kubernetes Engine cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/rancher) -[Install Calico for Windows on a Rancher RKE cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/rancher) +[Install Calico Open Source for Windows on a Rancher RKE cluster with Windows worker nodes.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/rancher) ## [📄️Basic policy demo](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/demo) -[An interactive demo to show how to apply basic network policy to pods in a Calico for Windows cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/demo) +[Interactive demo that applies basic Calico Open Source network policy to pods running on a Windows node.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/demo) ## [📄️Troubleshoot Calico for 
Windows](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/troubleshoot) -[Help for troubleshooting Calico for Windows issues in Calico this release.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/troubleshoot) +[Troubleshooting guide for Calico Open Source for Windows clusters — common issues, diagnostic steps, and where to look for logs.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/troubleshoot) ### Limitations and known issues @@ -8568,11 +8568,11 @@ Example output: ## [📄️Quickstart for Calico on K3s](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart) -[Install Calico on a single-node K3s cluster for testing or development in under 5 minutes.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart) +[Quickstart that installs Calico Open Source on a single-node K3s cluster in roughly 5 minutes for testing or development.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart) ## [📄️K3s multi-node install](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install) -[Install Calico on a multi node K3s cluster for testing or development.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install) +[Install Calico Open Source on a multi-node K3s cluster for testing or development workloads.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install) ### Quickstart for Calico on K3s @@ -9735,51 +9735,51 @@ kind delete cluster --name dev ## [📄️Calico the hard way](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/overview) -[A tutorial for installing Calico the hard way.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/overview) +[Step-by-step tutorial intro for Calico the hard way — the cluster you build with Calico Open Source, the components installed by hand, and 
prerequisites.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/overview) ## [📄️Stand up Kubernetes](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/standing-up-kubernetes) -[Get a Kubernetes cluster up and running.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/standing-up-kubernetes) +[Calico the hard way — stand up a minimal Kubernetes cluster ready to receive a manual Calico Open Source installation.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/standing-up-kubernetes) ## [📄️The Calico datastore](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/the-calico-datastore) -[The central datastore for your clusters' operational and configuration state.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/the-calico-datastore) +[Calico the hard way — choose between the Kubernetes API datastore and etcd for the Calico Open Source operational and configuration store.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/the-calico-datastore) ## [📄️Configure IP pools](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-ip-pools) -[Quick review of defining IP pools (IP address ranges) in clusters.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-ip-pools) +[Calico the hard way — define IP pools that govern which address ranges Calico Open Source assigns to pods and services.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-ip-pools) ## [📄️Install CNI plugin](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-cni-plugin) -[Steps to install the Calico Container Network Interface (CNI)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-cni-plugin) +[Calico the hard way — install the Calico Open Source CNI plugin on each node and wire it into 
kubelet.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-cni-plugin) ## [📄️Install Typha](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-typha) -[Learn about Typha for scaling deployment.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-typha) +[Calico the hard way — install Typha to fan out datastore reads so Calico Open Source can scale to large clusters.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-typha) ## [📄️Install calico/node](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-node) -[Configure and install calico/node as a daemon set.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-node) +[Calico the hard way — deploy calico/node as a DaemonSet so the Calico Open Source agent runs on every cluster node.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-node) ## [📄️Configure BGP peering](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-bgp-peering) -[Quick review of BGP peering options.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-bgp-peering) +[Calico the hard way — configure BGP peering between Calico Open Source nodes and review the available peering topologies.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-bgp-peering) ## [📄️Test networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-networking) -[Test that networking works correctly.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-networking) +[Calico the hard way — verify pod-to-pod connectivity and routing on a cluster after the manual Calico Open Source build-out.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-networking) ## [📄️Test network 
policy](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-network-policy) -[Verify that network policy works correctly.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-network-policy) +[Calico the hard way — verify that Calico Open Source network policy enforcement is working after the manual install.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-network-policy) ## [📄️End user RBAC](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/end-user-rbac) -[Quick review of common roles and access controls for running clusters in production.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/end-user-rbac) +[Calico the hard way — RBAC roles and access controls that govern who can edit Calico Open Source resources in a production cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/end-user-rbac) ## [📄️Istio integration](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/istio-integration) -[Enforce Calico network policy for Istio service mesh applications.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/istio-integration) +[Calico the hard way — extend Calico Open Source policy enforcement into Istio service-mesh sidecars for layer-7 traffic.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/istio-integration) ### Calico the hard way @@ -14854,19 +14854,19 @@ Congratulations! 
You now have a single-host Kubernetes cluster with Calico in nf ## [📄️Get started with VPP networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/getting-started) -[Install Calico with the VPP data plane on a Kubernetes cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/getting-started) +[Install Calico Open Source with the VPP userspace data plane on a Kubernetes cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/getting-started) ## [📄️IPsec configuration with VPP](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/ipsec) -[Enable IPsec for faster encryption between nodes when using the VPP data plane.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/ipsec) +[Configure IPsec encryption between nodes for Calico Open Source clusters running the VPP data plane.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/ipsec) ## [📄️Details of VPP implementation & known-issues](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/specifics) -[Behavioral discrepancies when running with the Calico/VPP data plane](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/specifics) +[Behavioral differences to expect when running Calico Open Source with the VPP data plane instead of iptables or eBPF.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/specifics) ## [📄️Install an OpenShift 4 cluster with Calico VPP](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/openshift) -[Install Calico VPP on an OpenShift 4 cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/openshift) +[Install Calico Open Source with the VPP data plane on an OpenShift 4 cluster.](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/openshift) ### Get started with VPP networking @@ -15810,11 +15810,11 @@ After installing Calico VPP, you can benefit from the features of the 
VPP data p ## [📄️Calico for OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/overview) -[Review the Calico components used in an OpenStack deployment.](https://docs.tigera.io/calico/latest/getting-started/openstack/overview) +[Components and topology used when running Calico Open Source as the networking and policy layer for an OpenStack deployment.](https://docs.tigera.io/calico/latest/getting-started/openstack/overview) ## [📄️System requirements](https://docs.tigera.io/calico/latest/getting-started/openstack/requirements) -[Requirements for installing Calico on OpenStack nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/requirements) +[Hypervisor, OS, and OpenStack requirements you must meet before installing Calico Open Source on OpenStack nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/requirements) ## [🗃Installation](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/) @@ -15929,23 +15929,23 @@ Due to the large number of distributions and kernel version out there, it’s ha ## [📄️Calico on OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/overview) -[Choose a method for installing Calico for OpenStack.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/overview) +[Pick an installation method for Calico Open Source on OpenStack — DevStack for evaluation, or a per-distribution path for production.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/overview) ## [📄️Ubuntu](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/ubuntu) -[Install Calico on OpenStack, Ubuntu nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/ubuntu) +[Install Calico Open Source on an OpenStack deployment running Ubuntu compute nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/ubuntu) ## [📄️Red Hat Enterprise 
Linux](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/redhat) -[Install Calico on OpenStack, Red Hat Enterprise Linux nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/redhat) +[Install Calico Open Source on an OpenStack deployment running Red Hat Enterprise Linux compute nodes.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/redhat) ## [📄️DevStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/devstack) -[Quickstart to show connectivity between DevStack and Calico.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/devstack) +[Quickstart that wires Calico Open Source into a DevStack OpenStack environment to verify connectivity and policy.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/devstack) ## [📄️Verify your deployment](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/verification) -[Quick steps to test that your Calico-based OpenStack deployment is running correctly.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/verification) +[Verification steps that confirm a Calico Open Source OpenStack deployment is forwarding traffic and applying policy correctly.](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/verification) ### Calico on OpenStack @@ -17342,13 +17342,17 @@ You can specify core configuration elements of your ingress gateway by specifyin Many customizations are available for the `GatewayAPI` resource. This resource has fields that allow some aspects of Gateway deployments to be customized. For example: -- `spec.gatewayDeployment.spec.template.metadata` allows arbitrary labels or annotations to be added to the pod that is created to implement each configured Gateway. -- `spec.gatewayDeployment.spec.template.spec.nodeSelector` allows control over where gateway implementation pods are scheduled. 
-- `spec.gatewayDeployment.spec.template.spec.containers` allows control over the memory and CPU that the gateway implementation pods can use. - `spec.gatewayControllerDeployment.spec.template.spec.nodeSelector` allows control over where the gateway controller is scheduled. -- `spec.gatewayDeployment.service.metadata.annotations` allows control over annotations to place on the service that is provisioned for each Gateway; these can be used, for example, to configure the cloud-specific type and properties of the associated external load balancer. -- `spec.gatewayDeployment.service.spec.*loadbalancer*` allows control over the corresponding `*loadbalancer*` fields in the service that is provisioned for each gateway; in some clouds these can also be used to configure the type and properties of the associated external load balancer. -- `spec.gatewayClasses` allows the provisioning of multiple `GatewayClass` resources, each with its own set of customizations, instead of the single `tigera-gateway-class` gateway class that the Tigera Operator provisions by default. + +- `spec.gatewayClasses` allows the provisioning of multiple `GatewayClass` resources, each with its own set of customizations, instead of the single `tigera-gateway-class` gateway class that the Tigera Operator provisions by default. Possible `GatewayClass` customizations include the following: + + + + - `gatewayDeployment.spec.template.metadata` allows arbitrary labels or annotations to be added to the pod that is created to implement each Gateway within that class. + - `gatewayDeployment.spec.template.spec.nodeSelector` allows control over where gateway implementation pods are scheduled. + - `gatewayDeployment.spec.template.spec.containers` allows control over the memory and CPU that the gateway implementation pods can use. 
+ - `gatewayService.metadata.annotations` allows control over annotations to place on the service that is provisioned for each Gateway; these can be used, for example, to configure the cloud-specific type and properties of the associated external load balancer. + - `gatewayService.spec.*loadBalancer*` allows control over the corresponding `*loadBalancer*` fields in the service that is provisioned for each gateway; in some clouds these can also be used to configure the type and properties of the associated external load balancer. For full details, see [the `GatewayAPI` reference documentation](https://docs.tigera.io/calico/latest/reference/installation/api#gatewayapi). @@ -23462,11 +23466,11 @@ Calico provides IP connectivity, but not layer 2 (L2) adjacency, between instanc - Applications or protocols that actually require L2 adjacency - such as routing protocols like OSPF - will not run successfully on instances on a Calico network. But the vast majority of applications that are IP-based will be just fine. -Traditionally, a Neutron network has always provided L2 adjacency between its instances, so this is the first way that CCalico differs from traditional Neutron semantics. Up to and including the Mitaka release, L2 adjacency was an assumed property of a Neutron network; so deployments using Calico simply had to *understand* that Calico networks were different in this detail. +Traditionally, a Neutron network has always provided L2 adjacency between its instances, so this is the first way that Calico differs from traditional Neutron semantics. Up to and including the Mitaka release, L2 adjacency was an assumed property of a Neutron network; so deployments using Calico simply had to *understand* that Calico networks were different in this detail. As of the Newton release, Calico's IP-only connectivity is expressible in the Neutron API, as a Network whose `l2_adjacency` property is `False`. 
However work is still needed to make Calico networks report `l2_adjacency False`, so at the moment - unfortunately - it *still* has to be understood that Calico networks do not provide L2 adjacency, even though they report `l2_adjacency True` when queried on the API. -> **SECONDARY:** Calico's connectivity design, based on IP routing, allows unicast IP and anycast IP. Anycast IP also requires support for allowed-address-pairs, or some other way of assigning the same IP address to more than one instance; work for allowed-address-pairs support is in progress at [opendev](https://review.openstack.org/#/c/344008/). Multicast IP support is on our roadmap but not yet implemented. Broadcast IP is not possible because it depends on L2 adjacency. +> **SECONDARY:** Calico's connectivity design, based on IP routing, allows unicast IP and anycast IP. Anycast IP can be achieved using Neutron floating IPs or allowed-address-pairs, if you also configure your routing to process multiple paths for a CIDR (`add_paths` in BIRD) and to program ECMP routes into the local kernel (`merge_paths` in BIRD). Multicast IP is not supported. Broadcast IP is not possible because it depends on L2 adjacency. ## Connectivity between different Calico networks[​](#connectivity-between-different-calico-networks) @@ -25535,125 +25539,125 @@ Writing network policies is how you restrict traffic to pods in your Kubernetes ##### [Adopt a zero trust network model for security](https://docs.tigera.io/calico/latest/network-policy/adopt-zero-trust) -[Best practices to adopt a zero trust network model to secure workloads and hosts. 
Learn 5 key requirements to control network access for cloud-native strategy.](https://docs.tigera.io/calico/latest/network-policy/adopt-zero-trust) +[Adopt a zero-trust network model for Kubernetes workloads and hosts using Calico Open Source — five requirements for controlling network access in cloud-native environments.](https://docs.tigera.io/calico/latest/network-policy/adopt-zero-trust) ##### [Get started with Calico network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy) -[Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy) +[Write your first Calico Open Source NetworkPolicy — sample policies that exercise the rich rule features that extend Kubernetes NetworkPolicy.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy) ##### [Calico policy tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-policy-tutorial) -[Learn how to create more advanced Calico network policies (namespace, allow and deny all ingress and egress).](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-policy-tutorial) +[Step-by-step tutorial for advanced Calico Open Source policy patterns — namespace scoping, allow-all, deny-all, and ingress and egress controls.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-policy-tutorial) ##### [Get started with Kubernetes network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy) -[Learn Kubernetes policy syntax, rules, and features for controlling network traffic.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy) +[Reference 
for Kubernetes NetworkPolicy syntax, rules, and features when used with the Calico Open Source enforcement engine.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy) ##### [Kubernetes policy, demo](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-demo) -[An interactive demo that visually shows how applying Kubernetes policy allows and denies connections.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-demo) +[Interactive demo for a Calico Open Source cluster that visualizes how Kubernetes NetworkPolicy allows and denies connections between pods.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-demo) ##### [Kubernetes policy, basic tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-basic) -[Learn how to use basic Kubernetes network policy to securely restrict traffic to/from pods.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-basic) +[Apply your first Kubernetes NetworkPolicy in a Calico Open Source cluster to restrict ingress and egress traffic to and from pods.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-basic) ##### [Kubernetes policy, advanced tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-advanced) -[Learn how to create more advanced Kubernetes network policies (namespace, allow and deny all ingress and egress).](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-advanced) +[Write more advanced Kubernetes NetworkPolicy resources in a Calico Open Source cluster — namespace scoping, allow-all, and deny-all 
variants.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-advanced) ##### [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-default-deny) -[Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-default-deny) +[Apply a default-deny network policy in a Calico Open Source cluster so unprotected pods are denied traffic until explicit policy is written.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-default-deny) ## Policy rules[​](#policy-rules) ##### [Basic rules](https://docs.tigera.io/calico/latest/network-policy/policy-rules/policy-rules-overview) -[Define network connectivity for Calico endpoints using policy rules and label selectors.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/policy-rules-overview) +[How to write policy rules in Calico Open Source — label selectors, source and destination match criteria, and rule actions.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/policy-rules-overview) ##### [Use namespace rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/namespace-policy) -[Use namespaces and namespace selectors in Calico network policy to group or separate resources. 
Use network policies to allow or deny traffic to/from pods that belong to specific namespaces.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/namespace-policy) +[Group or separate workloads in Calico Open Source policy using namespaces and namespace selectors so policies apply only to specified namespaces.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/namespace-policy) ##### [Use service rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-policy) -[Use Kubernetes Service names in policy rules.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-policy) +[Match on Kubernetes Service names in Calico Open Source policy rules instead of specific pod selectors.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-policy) ##### [Use service accounts rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-accounts) -[Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-accounts) +[Match on Kubernetes service accounts in Calico Open Source policy rules to validate workload identity and apply RBAC-controlled rules.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-accounts) ##### [Use external IPs or networks rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/external-ips-policy) -[Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/external-ips-policy) +[Restrict egress and ingress to specific IP ranges in Calico Open Source policy, either inline or via reusable network sets.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/external-ips-policy) 
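The external-IPs entry above describes restricting traffic to IP ranges either inline or through reusable network sets. As a minimal sketch of the network-set approach (the resource names, labels, workload selector, and CIDRs here are illustrative, not taken from the linked docs), a `GlobalNetworkSet` and a policy that selects it by label might look like:

```yaml
# Reusable set of external CIDRs, labelled so policies can select it.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkSet
metadata:
  name: partner-networks          # illustrative name
  labels:
    role: partners
spec:
  nets:
    - 198.51.100.0/24             # example CIDRs (RFC 5737 documentation ranges)
    - 203.0.113.0/24
---
# Allow egress from matching workloads only to members of that set.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-egress-to-partners
spec:
  selector: app == 'frontend'     # illustrative workload selector
  types:
    - Egress
  egress:
    - action: Allow
      destination:
        selector: role == 'partners'
```

Keeping the CIDRs in a labelled set means they can be updated in one place without editing any policy that references the label.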
##### [Use ICMP/ping rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping)

-[Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping)
+[Allow or deny ICMP and ping traffic for Calico Open Source workloads and host endpoints using policy rules.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping)

## Policy for hosts and VMs[​](#policy-for-hosts-and-vms)

##### [Protect hosts and VMs](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts)

-[Calico network policy not only protects workloads, but also hosts. Create a Calico network policies to restrict traffic to/from hosts.](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts)
+[Protect Kubernetes hosts and bare-metal nodes with Calico Open Source policy by writing rules that target host endpoints.](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts)

##### [Protect Kubernetes nodes](https://docs.tigera.io/calico/latest/network-policy/hosts/kubernetes-nodes)

-[Protect Kubernetes nodes with host endpoints managed by Calico.](https://docs.tigera.io/calico/latest/network-policy/hosts/kubernetes-nodes)
+[Protect Kubernetes node interfaces with Calico Open Source host endpoints to extend network policy to the node itself.](https://docs.tigera.io/calico/latest/network-policy/hosts/kubernetes-nodes)

##### [Protect hosts tutorial](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts-tutorial)

-[Learn how to secure incoming traffic from outside the cluster using Calico host endpoints with network policy, including allowing controlled access to specific Kubernetes services.](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts-tutorial)
+[Tutorial for protecting hosts in a Calico Open Source cluster — register host endpoints, write rules, and allow controlled access to specific Kubernetes services.](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts-tutorial)

##### [Apply policy to forwarded traffic](https://docs.tigera.io/calico/latest/network-policy/hosts/host-forwarded-traffic)

-[Apply Calico network policy to traffic being forward by hosts acting as routers or NAT gateways.](https://docs.tigera.io/calico/latest/network-policy/hosts/host-forwarded-traffic)
+[Apply Calico Open Source network policy to traffic forwarded through hosts acting as routers or NAT gateways.](https://docs.tigera.io/calico/latest/network-policy/hosts/host-forwarded-traffic)

## Policy for services[​](#policy-for-services)

##### [Apply Calico policy to Kubernetes node ports](https://docs.tigera.io/calico/latest/network-policy/services/kubernetes-node-ports)

-[Restrict access to Kubernetes node ports using Calico global network policy. Follow the steps to secure the host, the node ports, and the cluster.](https://docs.tigera.io/calico/latest/network-policy/services/kubernetes-node-ports)
+[Restrict access to Kubernetes NodePort services using Calico Open Source GlobalNetworkPolicy at the host endpoint.](https://docs.tigera.io/calico/latest/network-policy/services/kubernetes-node-ports)

##### [Apply Calico policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico/latest/network-policy/services/services-cluster-ips)

-[Expose Kubernetes service cluster IPs over BGP using Calico, and restrict who can access them using Calico network policy.](https://docs.tigera.io/calico/latest/network-policy/services/services-cluster-ips)
+[Expose Kubernetes Service ClusterIPs over BGP using Calico Open Source and restrict who can reach them with network policy.](https://docs.tigera.io/calico/latest/network-policy/services/services-cluster-ips)

## Policy for Istio[​](#policy-for-istio)

##### [Enforce Calico network policy for Istio service mesh](https://docs.tigera.io/calico/latest/network-policy/istio/app-layer-policy)

-[Enforce network policy for Istio service mesh including matching on HTTP methods and paths.](https://docs.tigera.io/calico/latest/network-policy/istio/app-layer-policy)
+[Apply Calico Open Source network policy to Istio service-mesh traffic, including matching on HTTP methods and paths.](https://docs.tigera.io/calico/latest/network-policy/istio/app-layer-policy)

##### [Use HTTP methods and paths in policy rules](https://docs.tigera.io/calico/latest/network-policy/istio/http-methods)

-[Create a Calico network policy for Istio-enabled apps to restrict ingress traffic matching HTTP methods or paths.](https://docs.tigera.io/calico/latest/network-policy/istio/http-methods)
+[Restrict ingress traffic to Istio-enabled apps by matching HTTP methods or paths in a Calico Open Source network policy.](https://docs.tigera.io/calico/latest/network-policy/istio/http-methods)

##### [Enforce Calico network policy using Istio (tutorial)](https://docs.tigera.io/calico/latest/network-policy/istio/enforce-policy-istio)

-[Learn how Calico integrates with Istio to provide fine-grained access control using Calico network policies enforced within the service mesh and network layer.](https://docs.tigera.io/calico/latest/network-policy/istio/enforce-policy-istio)
+[Use Calico Open Source with Istio to apply fine-grained access control at both the network layer and inside the service mesh.](https://docs.tigera.io/calico/latest/network-policy/istio/enforce-policy-istio)

## Securing component communications[​](#securing-component-communications)

##### [Encrypt in-cluster pod traffic](https://docs.tigera.io/calico/latest/network-policy/encrypt-cluster-pod-traffic)

-[Enable WireGuard for state-of-the-art cryptographic security between pods for Calico clusters.](https://docs.tigera.io/calico/latest/network-policy/encrypt-cluster-pod-traffic)
+[Turn on WireGuard encryption between pods on a Calico Open Source cluster for state-of-the-art cryptographic protection of in-cluster traffic.](https://docs.tigera.io/calico/latest/network-policy/encrypt-cluster-pod-traffic)

##### [Configure encryption and authentication to secure Calico components](https://docs.tigera.io/calico/latest/network-policy/comms/crypto-auth)

-[Enable TLS authentication and encryption for various Calico components.](https://docs.tigera.io/calico/latest/network-policy/comms/crypto-auth)
+[Turn on TLS authentication and encryption between Calico Open Source components using a custom certificate authority.](https://docs.tigera.io/calico/latest/network-policy/comms/crypto-auth)

##### [Schedule Typha for scaling to well-known nodes](https://docs.tigera.io/calico/latest/network-policy/comms/reduce-nodes)

-[Configure the Calico Typha TCP port.](https://docs.tigera.io/calico/latest/network-policy/comms/reduce-nodes)
+[Configure the TCP port used by Typha in a Calico Open Source cluster to reduce datastore load on large clusters.](https://docs.tigera.io/calico/latest/network-policy/comms/reduce-nodes)

##### [Secure Calico Prometheus endpoints](https://docs.tigera.io/calico/latest/network-policy/comms/secure-metrics)

-[Limit access to Calico metric endpoints using network policy.](https://docs.tigera.io/calico/latest/network-policy/comms/secure-metrics)
+[Restrict access to Calico Open Source metric endpoints using network policy.](https://docs.tigera.io/calico/latest/network-policy/comms/secure-metrics)

##### [Secure BGP sessions](https://docs.tigera.io/calico/latest/network-policy/comms/secure-bgp)

-[Configure BGP passwords to prevent attackers from injecting false routing information.](https://docs.tigera.io/calico/latest/network-policy/comms/secure-bgp)
+[Configure BGP authentication passwords for Calico Open Source so attackers cannot inject false routing information.](https://docs.tigera.io/calico/latest/network-policy/comms/secure-bgp)

## Network policy options with Calico Cloud[​](#network-policy-options-with-calico-cloud)

@@ -26015,7 +26019,7 @@ You may wish to review every security policy change request (aka pull request in

## [📄️Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-default-deny)

-[Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-default-deny)
+[Apply a default-deny network policy in a Calico Open Source cluster so unprotected pods are denied traffic until explicit policy is written.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-default-deny)

### Calico policy

@@ -26023,19 +26027,19 @@ You may wish to review every security policy change request (aka pull request in

## [📄️Get started with Calico network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy)

-[Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy)
+[Write your first Calico Open Source NetworkPolicy — sample policies that exercise the rich rule features that extend Kubernetes NetworkPolicy.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy)

## [📄️Calico automatic labels](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-labels)

-[Calico automatic labels for use with resources.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-labels)
+[Reference list of automatic labels Calico Open Source attaches to resources, useful as selectors in policy rules.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-labels)

## [📄️Get started with Calico network policy for OpenStack](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/network-policy-openstack)

-[Extend OpenStack security groups by applying Calico network policy and using labels to identify VMs within network policy rules.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/network-policy-openstack)
+[Extend OpenStack security groups with Calico Open Source network policy and label-based rules for VMs running on OpenStack.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/network-policy-openstack)

## [📄️Calico policy tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-policy-tutorial)

-[Learn how to create more advanced Calico network policies (namespace, allow and deny all ingress and egress).](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-policy-tutorial)
+[Step-by-step tutorial for advanced Calico Open Source policy patterns — namespace scoping, allow-all, deny-all, and ingress and egress controls.](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-policy-tutorial)

### Get started with Calico network policy

@@ -26938,19 +26942,19 @@ kubectl delete ns advanced-policy-demo

## [📄️Get started with Kubernetes network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy)

-[Learn Kubernetes policy syntax, rules, and features for controlling network traffic.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy)
+[Reference for Kubernetes NetworkPolicy syntax, rules, and features when used with the Calico Open Source enforcement engine.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy)

## [📄️Kubernetes policy, demo](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-demo)

-[An interactive demo that visually shows how applying Kubernetes policy allows and denies connections.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-demo)
+[Interactive demo for a Calico Open Source cluster that visualizes how Kubernetes NetworkPolicy allows and denies connections between pods.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-demo)

## [📄️Kubernetes policy, basic tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-basic)

-[Learn how to use basic Kubernetes network policy to securely restrict traffic to/from pods.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-basic)
+[Apply your first Kubernetes NetworkPolicy in a Calico Open Source cluster to restrict ingress and egress traffic to and from pods.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-basic)

## [📄️Kubernetes policy, advanced tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-advanced)

-[Learn how to create more advanced Kubernetes network policies (namespace, allow and deny all ingress and egress).](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-advanced)
+[Write more advanced Kubernetes NetworkPolicy resources in a Calico Open Source cluster — namespace scoping, allow-all, and deny-all variants.](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-advanced)

### Get started with Kubernetes network policy

@@ -28129,11 +28133,11 @@ spec:

## [📄️Get started with policy tiers](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/tiered-policy)

-[Understand how tiered policy works and supports microsegmentation.](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/tiered-policy)
+[How tiered policy works in Calico Open Source — evaluation order, pass actions, and using tiers to support microsegmentation.](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/tiered-policy)

## [📄️Configure RBAC for tiered policies](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/rbac-tiered-policies)

-[Configure RBAC to control access to policies and tiers.](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/rbac-tiered-policies)
+[Set up Kubernetes RBAC to control which users can edit Calico Open Source policies and tiers in a multi-team cluster.](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/rbac-tiered-policies)

### Get started with policy tiers

@@ -29297,31 +29301,31 @@ When the result of a staged policy is satisfactory, create an identical policy a

## [📄️Basic rules](https://docs.tigera.io/calico/latest/network-policy/policy-rules/policy-rules-overview)

-[Define network connectivity for Calico endpoints using policy rules and label selectors.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/policy-rules-overview)
+[How to write policy rules in Calico Open Source — label selectors, source and destination match criteria, and rule actions.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/policy-rules-overview)

## [📄️Use namespace rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/namespace-policy)

-[Use namespaces and namespace selectors in Calico network policy to group or separate resources. Use network policies to allow or deny traffic to/from pods that belong to specific namespaces.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/namespace-policy)
+[Group or separate workloads in Calico Open Source policy using namespaces and namespace selectors so policies apply only to specified namespaces.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/namespace-policy)

## [📄️Use service rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-policy)

-[Use Kubernetes Service names in policy rules.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-policy)
+[Match on Kubernetes Service names in Calico Open Source policy rules instead of specific pod selectors.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-policy)

## [📄️Use service accounts rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-accounts)

-[Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-accounts)
+[Match on Kubernetes service accounts in Calico Open Source policy rules to validate workload identity and apply RBAC-controlled rules.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-accounts)

## [📄️Use external IPs or networks rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/external-ips-policy)

-[Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/external-ips-policy)
+[Restrict egress and ingress to specific IP ranges in Calico Open Source policy, either inline or via reusable network sets.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/external-ips-policy)

## [📄️Use ICMP/ping rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping)

-[Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping)
+[Allow or deny ICMP and ping traffic for Calico Open Source workloads and host endpoints using policy rules.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping)

## [📄️Use log rules to test network policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/log-rules)

-[Debug your policies with Log rules.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/log-rules)
+[Add Log actions to Calico Open Source policy rules to debug which rules are matching traffic at runtime.](https://docs.tigera.io/calico/latest/network-policy/policy-rules/log-rules)

### Basic rules

@@ -30315,19 +30319,19 @@ For more on the match criteria and policy actions, see:

## [📄️Protect hosts and VMs](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts)

-[Calico network policy not only protects workloads, but also hosts. Create a Calico network policies to restrict traffic to/from hosts.](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts)
+[Protect Kubernetes hosts and bare-metal nodes with Calico Open Source policy by writing rules that target host endpoints.](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts)

## [📄️Protect Kubernetes nodes](https://docs.tigera.io/calico/latest/network-policy/hosts/kubernetes-nodes)

-[Protect Kubernetes nodes with host endpoints managed by Calico.](https://docs.tigera.io/calico/latest/network-policy/hosts/kubernetes-nodes)
+[Protect Kubernetes node interfaces with Calico Open Source host endpoints to extend network policy to the node itself.](https://docs.tigera.io/calico/latest/network-policy/hosts/kubernetes-nodes)

## [📄️Protect hosts tutorial](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts-tutorial)

-[Learn how to secure incoming traffic from outside the cluster using Calico host endpoints with network policy, including allowing controlled access to specific Kubernetes services.](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts-tutorial)
+[Tutorial for protecting hosts in a Calico Open Source cluster — register host endpoints, write rules, and allow controlled access to specific Kubernetes services.](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts-tutorial)

## [📄️Apply policy to forwarded traffic](https://docs.tigera.io/calico/latest/network-policy/hosts/host-forwarded-traffic)

-[Apply Calico network policy to traffic being forward by hosts acting as routers or NAT gateways.](https://docs.tigera.io/calico/latest/network-policy/hosts/host-forwarded-traffic)
+[Apply Calico Open Source network policy to traffic forwarded through hosts acting as routers or NAT gateways.](https://docs.tigera.io/calico/latest/network-policy/hosts/host-forwarded-traffic)

### Protect hosts and VMs

@@ -31297,11 +31301,11 @@ For completeness, you could also create a HostEndpoint for eth0, but because we

## [📄️Apply Calico policy to Kubernetes node ports](https://docs.tigera.io/calico/latest/network-policy/services/kubernetes-node-ports)

-[Restrict access to Kubernetes node ports using Calico global network policy. Follow the steps to secure the host, the node ports, and the cluster.](https://docs.tigera.io/calico/latest/network-policy/services/kubernetes-node-ports)
+[Restrict access to Kubernetes NodePort services using Calico Open Source GlobalNetworkPolicy at the host endpoint.](https://docs.tigera.io/calico/latest/network-policy/services/kubernetes-node-ports)

## [📄️Apply Calico policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico/latest/network-policy/services/services-cluster-ips)

-[Expose Kubernetes service cluster IPs over BGP using Calico, and restrict who can access them using Calico network policy.](https://docs.tigera.io/calico/latest/network-policy/services/services-cluster-ips)
+[Expose Kubernetes Service ClusterIPs over BGP using Calico Open Source and restrict who can reach them with network policy.](https://docs.tigera.io/calico/latest/network-policy/services/services-cluster-ips)

### Apply Calico policy to Kubernetes node ports

@@ -31755,15 +31759,15 @@ In the previous example policies, the label **k8s-role: node** is used to identi

## [📄️Enforce Calico network policy for Istio service mesh](https://docs.tigera.io/calico/latest/network-policy/istio/app-layer-policy)

-[Enforce network policy for Istio service mesh including matching on HTTP methods and paths.](https://docs.tigera.io/calico/latest/network-policy/istio/app-layer-policy)
+[Apply Calico Open Source network policy to Istio service-mesh traffic, including matching on HTTP methods and paths.](https://docs.tigera.io/calico/latest/network-policy/istio/app-layer-policy)

## [📄️Use HTTP methods and paths in policy rules](https://docs.tigera.io/calico/latest/network-policy/istio/http-methods)

-[Create a Calico network policy for Istio-enabled apps to restrict ingress traffic matching HTTP methods or paths.](https://docs.tigera.io/calico/latest/network-policy/istio/http-methods)
+[Restrict ingress traffic to Istio-enabled apps by matching HTTP methods or paths in a Calico Open Source network policy.](https://docs.tigera.io/calico/latest/network-policy/istio/http-methods)

## [📄️Enforce Calico network policy using Istio (tutorial)](https://docs.tigera.io/calico/latest/network-policy/istio/enforce-policy-istio)

-[Learn how Calico integrates with Istio to provide fine-grained access control using Calico network policies enforced within the service mesh and network layer.](https://docs.tigera.io/calico/latest/network-policy/istio/enforce-policy-istio)
+[Use Calico Open Source with Istio to apply fine-grained access control at both the network layer and inside the service mesh.](https://docs.tigera.io/calico/latest/network-policy/istio/enforce-policy-istio)

### Enforce Calico network policy for Istio service mesh

@@ -32681,11 +32685,11 @@ We have left out the JSON formatting because we do not expect to get a valid JSO

## [📄️Enable extreme high-connection workloads](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/high-connection-workloads)

-[Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience extremely large number of connections.](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/high-connection-workloads)
+[Bypass Linux conntrack with a Calico Open Source policy rule for workloads that handle an extreme number of concurrent connections.](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/high-connection-workloads)

## [📄️Defend against DoS attacks](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/defend-dos-attack)

-[Define DoS mitigation rules in Calico policy to quickly drop connections when under attack. Learn how rules use eBPF and XDP, including hardware offload when available.](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/defend-dos-attack)
+[Define DoS mitigation rules in Calico Open Source policy that drop connections at the eBPF or XDP layer, with hardware offload when available.](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/defend-dos-attack)

### Enable extreme high-connection workloads

@@ -33224,19 +33228,19 @@ To disable WireGuard on all nodes modify the default Felix configuration. For ex

## [📄️Configure encryption and authentication to secure Calico components](https://docs.tigera.io/calico/latest/network-policy/comms/crypto-auth)

-[Enable TLS authentication and encryption for various Calico components.](https://docs.tigera.io/calico/latest/network-policy/comms/crypto-auth)
+[Turn on TLS authentication and encryption between Calico Open Source components using a custom certificate authority.](https://docs.tigera.io/calico/latest/network-policy/comms/crypto-auth)

## [📄️Schedule Typha for scaling to well-known nodes](https://docs.tigera.io/calico/latest/network-policy/comms/reduce-nodes)

-[Configure the Calico Typha TCP port.](https://docs.tigera.io/calico/latest/network-policy/comms/reduce-nodes)
+[Configure the TCP port used by Typha in a Calico Open Source cluster to reduce datastore load on large clusters.](https://docs.tigera.io/calico/latest/network-policy/comms/reduce-nodes)

## [📄️Secure Calico Prometheus endpoints](https://docs.tigera.io/calico/latest/network-policy/comms/secure-metrics)

-[Limit access to Calico metric endpoints using network policy.](https://docs.tigera.io/calico/latest/network-policy/comms/secure-metrics)
+[Restrict access to Calico Open Source metric endpoints using network policy.](https://docs.tigera.io/calico/latest/network-policy/comms/secure-metrics)

## [📄️Secure BGP sessions](https://docs.tigera.io/calico/latest/network-policy/comms/secure-bgp)

-[Configure BGP passwords to prevent attackers from injecting false routing information.](https://docs.tigera.io/calico/latest/network-policy/comms/secure-bgp)
+[Configure BGP authentication passwords for Calico Open Source so attackers cannot inject false routing information.](https://docs.tigera.io/calico/latest/network-policy/comms/secure-bgp)

### Configure encryption and authentication to secure Calico components

@@ -39873,6 +39877,7 @@ Resource Types

- [Goldmane](#goldmane)
- [ImageSet](#imageset)
- [Installation](#installation)
+- [Istio](#istio)
- [TigeraStatus](#tigerastatus)
- [Whisker](#whisker)

@@ -40012,10 +40017,11 @@ APIServerSpec defines the desired state of Tigera API server.

- [APIServer](#apiserver)

-| Field | Description |
-| ------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `logging` *[APIServerPodLogging](#apiserverpodlogging)* | (Optional) |
-| `apiServerDeployment` *[APIServerDeployment](#apiserverdeployment)* | APIServerDeployment configures the calico-apiserver Deployment. If used in conjunction with ControlPlaneNodeSelector or ControlPlaneTolerations, then these overrides take precedence. |
+| Field | Description |
+| ---------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `logging` *[APIServerPodLogging](#apiserverpodlogging)* | (Optional) |
+| `apiServerDeployment` *[APIServerDeployment](#apiserverdeployment)* | APIServerDeployment configures the calico-apiserver Deployment. If used in conjunction with ControlPlaneNodeSelector or ControlPlaneTolerations, then these overrides take precedence. |
+| `calicoWebhooksDeployment` *[CalicoWebhooksDeployment](#calicowebhooksdeployment)* | (Optional) CalicoWebhooksDeployment configures the calico-webhooks Deployment. |

### APIServerStatus[​](#apiserverstatus)

@@ -40298,6 +40304,7 @@ CalicoNetworkSpec specifies configuration options for Calico provided pod networ

| `bpfNetworkBootstrap` *[BPFNetworkBootstrapType](#bpfnetworkbootstraptype)* | (Optional) BPFNetworkBootstrap manages the initial networking setup required to configure the BPF dataplane. When enabled, the operator tries to bootstraps access to the Kubernetes API Server by using the Kubernetes service and its associated endpoints. This field should be enabled only if linuxDataplane is set to "BPF". If another dataplane is selected, this field must be omitted or explicitly set to Disabled. When disabled and linuxDataplane is BPF, you must manually provide the Kubernetes API Server information via the "kubernetes-service-endpoint" ConfigMap. It is invalid to use both the ConfigMap and have this field set to true at the same time. Default: Disabled |
| `kubeProxyManagement` *[KubeProxyManagementType](#kubeproxymanagementtype)* | (Optional) KubeProxyManagement controls whether the operator manages the kube-proxy DaemonSet. When enabled, the operator will manage the DaemonSet by patching it: it disables kube-proxy if the dataplane is BPF, or enables it otherwise. Default: Disabled |
| `bgp` *[BGPOption](#bgpoption)* | (Optional) BGP configures whether or not to enable Calico's BGP capabilities. |
+| `clusterRoutingMode` *[ClusterRoutingMode](#clusterroutingmode)* | (Optional) ClusterRoutingMode controls how nodes get a route to a workload on another node, when that workload's IP comes from an IP Pool with vxlanMode: Never. When ClusterRoutingMode is BIRD, confd and BIRD program that route. When ClusterRoutingMode is Felix, it is expected that Felix will program that route. Felix always programs such routes for IP Pools with vxlanMode: Always or vxlanMode: CrossSubnet. \[Default: BIRD] |
| `ipPools` *[IPPool](#ippool) array* | (Optional) IPPools contains a list of IP pools to manage. If nil, a single IPv4 IP pool will be created by the operator. If an empty list is provided, the operator will not create any IP pools and will instead wait for IP pools to be created out-of-band. IP pools in this list will be reconciled by the operator and should not be modified out-of-band. |
| `mtu` *integer* | (Optional) MTU specifies the maximum transmission unit to use on the pod network. If not specified, Calico will perform MTU auto-detection based on the cluster network. |
| `nodeAddressAutodetectionV4` *[NodeAddressAutodetection](#nodeaddressautodetection)* | (Optional) NodeAddressAutodetectionV4 specifies an approach to automatically detect node IPv4 addresses. If not specified, will use default auto-detection settings to acquire an IPv4 address for each node. |

@@ -40412,10 +40419,10 @@ CalicoNodeWindowsDaemonSetContainer is a calico-node-windows DaemonSet container

- [CalicoNodeWindowsDaemonSetPodSpec](#caliconodewindowsdaemonsetpodspec)

-| Field | Description |
-| --------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `name` *string* | Name is an enum which identifies the calico-node-windows DaemonSet container by name. Supported values are: calico-node-windows |
-| `resources` *[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)* | (Optional) Resources allows customization of limits and requests for compute resources such as cpu and memory. If specified, this overrides the named calico-node-windows DaemonSet container's resources. If omitted, the calico-node-windows DaemonSet will use its default value for this container's resources. If used in conjunction with the deprecated ComponentResources, then this value takes precedence. |
+| Field | Description |
+| --------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `name` *string* | Name is an enum which identifies the calico-node-windows DaemonSet container by name. Supported values are: node, felix, confd calico-node-windows is allowed because it was previously allowed. |
+| `resources` *[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)* | (Optional) Resources allows customization of limits and requests for compute resources such as cpu and memory. If specified, this overrides the named DaemonSet container's resources. If omitted, the DaemonSet will use its default value for this container's resources. If used in conjunction with the deprecated ComponentResources, then this value takes precedence. |

### CalicoNodeWindowsDaemonSetInitContainer[​](#caliconodewindowsdaemonsetinitcontainer)

@@ -40472,6 +40479,88 @@ CalicoNodeWindowsDaemonSetSpec defines configuration for the calico-node-windows

| `minReadySeconds` *integer* | (Optional) MinReadySeconds is the minimum number of seconds for which a newly created DaemonSet pod should be ready without any of its container crashing, for it to be considered available. If specified, this overrides any minReadySeconds value that may be set on the calico-node-windows DaemonSet. If omitted, the calico-node-windows DaemonSet will use its default value for minReadySeconds. |
| `template` *[CalicoNodeWindowsDaemonSetPodTemplateSpec](#caliconodewindowsdaemonsetpodtemplatespec)* | (Optional) Template describes the calico-node-windows DaemonSet pod that will be created. |

+### CalicoWebhooksDeployment[​](#calicowebhooksdeployment)
+
+CalicoWebhooksDeployment is the configuration for the calico-webhooks Deployment.
+
+*Appears in:*
+
+- [APIServerSpec](#apiserverspec)
+
+| Field | Description |
+| ---------------------------------------------------------------------- | -------------------------------------------------------------------------- |
+| `metadata` *[Metadata](#metadata)* | (Optional) Refer to Kubernetes API documentation for fields of `metadata`. |
+| `spec` *[CalicoWebhooksDeploymentSpec](#calicowebhooksdeploymentspec)* | (Optional) Spec is the specification of the calico-webhooks Deployment. |
+
+### CalicoWebhooksDeploymentContainer[​](#calicowebhooksdeploymentcontainer)
+
+CalicoWebhooksDeploymentContainer is a calico-webhooks Deployment container.
+
+*Appears in:*
+
+- [CalicoWebhooksDeploymentPodSpec](#calicowebhooksdeploymentpodspec)
+
+| Field | Description |
+| --------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `name` *string* | Name is an enum which identifies the calico-webhooks Deployment container by name. Supported values are: calico-webhooks |
+| `resources` *[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)* | (Optional) Resources allows customization of limits and requests for compute resources such as cpu and memory. If specified, this overrides the named calico-webhooks Deployment container's resources. If omitted, the calico-webhooks Deployment will use its default value for this container's resources. |
+| `ports` *[CalicoWebhooksDeploymentContainerPort](#calicowebhooksdeploymentcontainerport) array* | (Optional) Ports allows customization of the calico-webhooks container's ports. If specified, this overrides the default container port configuration. If omitted, the calico-webhooks Deployment will use its default port (6443). |
+
+### CalicoWebhooksDeploymentContainerPort[​](#calicowebhooksdeploymentcontainerport)
+
+CalicoWebhooksDeploymentContainerPort defines a port override for a calico-webhooks container.
+ +*Appears in:* + +- [CalicoWebhooksDeploymentContainer](#calicowebhooksdeploymentcontainer) + +| Field | Description | +| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | +| `name` *string* | Name is an enum which identifies the calico-webhooks Deployment container port by name. Supported values are: calico-webhooks | +| `containerPort` *integer* | Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. | + +### CalicoWebhooksDeploymentPodSpec[​](#calicowebhooksdeploymentpodspec) + +CalicoWebhooksDeploymentPodSpec is the calico-webhooks Deployment's PodSpec. + +*Appears in:* + +- [CalicoWebhooksDeploymentPodTemplateSpec](#calicowebhooksdeploymentpodtemplatespec) + +| Field | Description | +| --------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `containers` *[CalicoWebhooksDeploymentContainer](#calicowebhooksdeploymentcontainer) array* | (Optional) Containers is a list of calico-webhooks containers. If specified, this overrides the specified calico-webhooks Deployment containers. 
If omitted, the calico-webhooks Deployment will use its default values for its containers. | +| `affinity` *[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#affinity-v1-core)* | (Optional) Affinity is a group of affinity scheduling rules for the calico-webhooks pods. If specified, this overrides any affinity that may be set on the calico-webhooks Deployment. If omitted, the calico-webhooks Deployment will use its default value for affinity. WARNING: Please note that this field will override the default calico-webhooks Deployment affinity. | +| `nodeSelector` *object (keys:string, values:string)* | NodeSelector is the calico-webhooks pod's scheduling constraints. If specified, each of the key/value pairs are added to the calico-webhooks Deployment nodeSelector provided the key does not already exist in the object's nodeSelector. If used in conjunction with ControlPlaneNodeSelector, that nodeSelector is set on the calico-webhooks Deployment and each of this field's key/value pairs are added to the calico-webhooks Deployment nodeSelector provided the key does not already exist in the object's nodeSelector. If omitted, the calico-webhooks Deployment will use its default value for nodeSelector. WARNING: Please note that this field will modify the default calico-webhooks Deployment nodeSelector. | +| `tolerations` *[Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#toleration-v1-core) array* | (Optional) Tolerations is the calico-webhooks pod's tolerations. If specified, this overrides any tolerations that may be set on the calico-webhooks Deployment. If omitted, the calico-webhooks Deployment will use its default value for tolerations. WARNING: Please note that this field will override the default calico-webhooks Deployment tolerations. | +| `hostNetwork` *boolean* | (Optional) HostNetwork forces the webhook pod to use the host's network namespace. 
When true, the webhook pod will run with hostNetwork=true and DNSPolicy=ClusterFirstWithHostNet. When nil or omitted, the operator auto-detects whether host networking is required (e.g., for EKS/TKG with Calico CNI). | + +### CalicoWebhooksDeploymentPodTemplateSpec[​](#calicowebhooksdeploymentpodtemplatespec) + +CalicoWebhooksDeploymentPodTemplateSpec is the calico-webhooks Deployment's PodTemplateSpec + +*Appears in:* + +- [CalicoWebhooksDeploymentSpec](#calicowebhooksdeploymentspec) + +| Field | Description | +| ---------------------------------------------------------------------------- | -------------------------------------------------------------------------- | +| `metadata` *[Metadata](#metadata)* | (Optional) Refer to Kubernetes API documentation for fields of `metadata`. | +| `spec` *[CalicoWebhooksDeploymentPodSpec](#calicowebhooksdeploymentpodspec)* | (Optional) Spec is the calico-webhooks Deployment's PodSpec. | + +### CalicoWebhooksDeploymentSpec[​](#calicowebhooksdeploymentspec) + +CalicoWebhooksDeploymentSpec defines configuration for the calico-webhooks Deployment. + +*Appears in:* + +- [CalicoWebhooksDeployment](#calicowebhooksdeployment) + +| Field | Description | +| ------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `minReadySeconds` *integer* | (Optional) MinReadySeconds is the minimum number of seconds for which a newly created Deployment pod should be ready without any of its container crashing, for it to be considered available. 
If specified, this overrides any minReadySeconds value that may be set on the calico-webhooks Deployment. If omitted, the calico-webhooks Deployment will use its default value for minReadySeconds. | +| `template` *[CalicoWebhooksDeploymentPodTemplateSpec](#calicowebhooksdeploymentpodtemplatespec)* | (Optional) Template describes the calico-webhooks Deployment pod that will be created. | + ### CalicoWindowsUpgradeDaemonSet[​](#calicowindowsupgradedaemonset) Deprecated. The CalicoWindowsUpgradeDaemonSet is deprecated and will be removed from the API in the future. CalicoWindowsUpgradeDaemonSet is the configuration for the calico-windows-upgrade DaemonSet. @@ -40554,6 +40643,25 @@ CertificateManagement configures pods to submit a CertificateSigningRequest to t | `keyAlgorithm` *string* | (Optional) Specify the algorithm used by pods to generate a key pair that is associated with the X.509 certificate request. Default: RSAWithSize2048 | | `signatureAlgorithm` *string* | (Optional) Specify the algorithm used for the signature of the X.509 certificate request. Default: SHA256WithRSA | +### ClusterRoutingMode[​](#clusterroutingmode) + +*Underlying type:* *string* + +ClusterRoutingMode describes the mode of cluster routing. + +*Validation:* + +- Enum: \[BIRD Felix] + +*Appears in:* + +- [CalicoNetworkSpec](#caliconetworkspec) + +| Value | Description | +| ------- | ----------- | +| `BIRD` | | +| `Felix` | | + ### ComponentName[​](#componentname) *Underlying type:* *string* @@ -41119,9 +41227,10 @@ GoldmaneDeploymentSpec defines configuration for the goldmane Deployment. 
- [Goldmane](#goldmane)

-| Field | Description |
-| ---------------------------------------------------------------- | ----------- |
-| `goldmaneDeployment` *[GoldmaneDeployment](#goldmanedeployment)* | |
+| Field | Description |
+| ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `goldmaneDeployment` *[GoldmaneDeployment](#goldmanedeployment)* | |
+| `metricsPort` *integer* | (Optional) MetricsPort configures the port that Goldmane uses to serve Prometheus metrics. When set to a non-zero value, Goldmane will expose a /metrics endpoint on the given port. Set to zero to disable metrics. If omitted, metrics are disabled. |

### GoldmaneStatus[​](#goldmanestatus)

@@ -41225,7 +41334,7 @@ IPAMSpec contains configuration for pod IP address management.

### ImageSet[​](#imageset)

-ImageSet is used to specify image digests for the images that the operator deploys. The name of the ImageSet is expected to be in the format `-`. The `variant` used is `enterprise` if the InstallationSpec Variant is `CalicoEnterprise` otherwise it is `calico`. The `release` must match the version of the variant that the operator is built to deploy, this version can be obtained by passing the `--version` flag to the operator binary.
+ImageSet is used to specify image digests for the images that the operator deploys. The name of the ImageSet is expected to be in the format `<variant>-<release>`. The `variant` used is `enterprise` if the InstallationSpec Variant is `TigeraSecureEnterprise`; otherwise it is `calico`. The `release` must match the version of the variant that the operator is built to deploy; this version can be obtained by passing the `--version` flag to the operator binary.
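+For example, a minimal ImageSet following the naming rule above might look like this sketch (the release version and the digest are illustrative placeholders, not real values):
+
+```yaml
+apiVersion: operator.tigera.io/v1
+kind: ImageSet
+metadata:
+  # Name follows <variant>-<release>; "calico" variant with an assumed release version.
+  name: calico-v3.30.0
+spec:
+  images:
+    # Placeholder digest for illustration only.
+    - image: calico/node
+      digest: sha256:0000000000000000000000000000000000000000000000000000000000000000
+```
+
+The operator looks up each deployed image by name in this list and pins it to the given digest.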
| Field | Description | | ------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------- | @@ -41269,7 +41378,7 @@ InstallationSpec defines configuration for a Calico or Calico Enterprise install | Field | Description | | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `variant` *[ProductVariant](#productvariant)* | (Optional) Variant is the product to install - one of Calico or CalicoEnterprise Default: Calico | +| `variant` *[ProductVariant](#productvariant)* | (Optional) Variant is the product to install - one of Calico or TigeraSecureEnterprise Default: Calico | | `registry` *string* | (Optional) Registry is the default Docker registry used for component Docker images. If specified then the given value must end with a slash character (`/`) and all images will be pulled from this registry. If not specified then the default registries will be used. A special case value, UseDefault, is supported to explicitly specify the default registries will be used. Image format: `/:` This option allows configuring the `` portion of the above format. | | `imagePath` *string* | (Optional) ImagePath allows for the path part of an image to be specified. If specified then the specified value will be used as the image path for each image. 
If not specified or empty, the default for each image will be used. A special case value, UseDefault, is supported to explicitly specify the default image path will be used for each image. Image format: `/:` This option allows configuring the `` portion of the above format. | | `imagePrefix` *string* | (Optional) ImagePrefix allows for the prefix part of an image to be specified. If specified then the given value will be used as a prefix on each image. If not specified or empty, no prefix will be used. A special case value, UseDefault, is supported to explicitly specify the default image prefix will be used for each image. Image format: `/:` This option allows configuring the `` portion of the above format. | @@ -41289,7 +41398,7 @@ InstallationSpec defines configuration for a Calico or Calico Enterprise install | `componentResources` *[ComponentResource](#componentresource) array* | (Optional) Deprecated. Please use CalicoNodeDaemonSet, TyphaDeployment, and KubeControllersDeployment. ComponentResources can be used to customize the resource requirements for each component. Node, Typha, and KubeControllers are supported for installations. | | `certificateManagement` *[CertificateManagement](#certificatemanagement)* | (Optional) CertificateManagement configures pods to submit a CertificateSigningRequest to the certificates.k8s.io/v1 API in order to obtain TLS certificates. This feature requires that you bring your own CSR signing and approval process, otherwise pods will be stuck during initialization. | | `tlsCipherSuites` *[TLSCipherSuites](#tlsciphersuites)* | (Optional) TLSCipherSuites defines the cipher suite list that the TLS protocol should use during secure communication. | -| `nonPrivileged` *[NonPrivilegedType](#nonprivilegedtype)* | (Optional) NonPrivileged configures Calico to be run in non-privileged containers as non-root users where possible. | +| `nonPrivileged` *[NonPrivilegedType](#nonprivilegedtype)* | (Optional) Deprecated. 
NonPrivileged is deprecated and will be removed from the API in a future release. Enabling this field is not supported and will cause errors. NonPrivileged configures Calico to be run in non-privileged containers as non-root users where possible. | | `calicoNodeDaemonSet` *[CalicoNodeDaemonSet](#caliconodedaemonset)* | (Optional) CalicoNodeDaemonSet configures the calico-node DaemonSet. If used in conjunction with the deprecated ComponentResources, then these overrides take precedence. | | `csiNodeDriverDaemonSet` *[CSINodeDriverDaemonSet](#csinodedriverdaemonset)* | (Optional) CSINodeDriverDaemonSet configures the csi-node-driver DaemonSet. | | `calicoKubeControllersDeployment` *[CalicoKubeControllersDeployment](#calicokubecontrollersdeployment)* | (Optional) CalicoKubeControllersDeployment configures the calico-kube-controllers Deployment. If used in conjunction with the deprecated ComponentResources, then these overrides take precedence. | @@ -41313,13 +41422,154 @@ InstallationStatus defines the observed state of the Calico or Calico Enterprise | Field | Description | | ------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `variant` *[ProductVariant](#productvariant)* | Variant is the most recently observed installed variant - one of Calico or CalicoEnterprise | +| `variant` *[ProductVariant](#productvariant)* | Variant is the most recently observed installed variant - one of Calico or TigeraSecureEnterprise | | `mtu` *integer* | MTU is the most recently observed value for pod network MTU. This may be an explicitly configured value, or based on Calico's native auto-detetion. | | `imageSet` *string* | (Optional) ImageSet is the name of the ImageSet being used, if there is an ImageSet that is being used. 
If an ImageSet is not being used then this will not be set. | | `computed` *[InstallationSpec](#installationspec)* | (Optional) Computed is the final installation including overlaid resources. | | `calicoVersion` *string* | CalicoVersion shows the current running version of calico. CalicoVersion along with Variant is needed to know the exact version deployed. | | `conditions` *[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#condition-v1-meta) array* | (Optional) Conditions represents the latest observed set of conditions for the component. A component may be one or more of Ready, Progressing, Degraded or other customer types. | +### Istio[​](#istio) + +Istio is the Schema for the istios API + +| Field | Description | +| ------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------- | +| `apiVersion` *string* | `operator.tigera.io/v1` | +| `kind` *string* | `Istio` | +| `metadata` *[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)* | Refer to Kubernetes API documentation for fields of `metadata`. | +| `spec` *[IstioSpec](#istiospec)* | | +| `status` *[IstioStatus](#istiostatus)* | | + +### IstioCNIDaemonset[​](#istiocnidaemonset) + +IstioCNIDaemonset defines customized settings for the Istio CNI plugin. + +*Appears in:* + +- [IstioSpec](#istiospec) + +| Field | Description | +| -------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| `spec` *[IstioCNIDaemonsetSpec](#istiocnidaemonsetspec)* | (Optional) Spec allows users to specify custom fields for the Istio CNI Daemonset. | + +### IstioCNIDaemonsetPodSpec[​](#istiocnidaemonsetpodspec) + +IstioCNIDaemonsetPodSpec defines the pod spec for customizing the Istio CNI Daemonset. 
+ +*Appears in:* + +- [IstioCNIDaemonsetSpecTemplate](#istiocnidaemonsetspectemplate) + +| Field | Description | +| --------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | +| `affinity` *[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#affinity-v1-core)* | (Optional) Affinity specifies the affinity for the deployment. | +| `nodeSelector` *object (keys:string, values:string)* | (Optional) NodeSelector specifies the node affinity for the deployment. | +| `resources` *[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)* | (Optional) Resources specifies the compute resources required for the deployment. | +| `tolerations` *[Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#toleration-v1-core) array* | (Optional) Tolerations specifies the tolerations for the deployment. | + +### IstioCNIDaemonsetSpec[​](#istiocnidaemonsetspec) + +IstioCNIDaemonsetSpec defines the spec for customizing the Istio CNI Daemonset. + +*Appears in:* + +- [IstioCNIDaemonset](#istiocnidaemonset) + +| Field | Description | +| ---------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- | +| `template` *[IstioCNIDaemonsetSpecTemplate](#istiocnidaemonsetspectemplate)* | (Optional) Template allows users to specify custom fields for the Istio CNI Daemonset. | + +### IstioCNIDaemonsetSpecTemplate[​](#istiocnidaemonsetspectemplate) + +IstioCNIDaemonsetSpecTemplate defines the template for customizing the Istio CNI Daemonset. 
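+As a sketch of how these nested types compose, an Istio resource could customize the Istio CNI pods through this template (the resource name, node selector, and toleration are illustrative assumptions):
+
+```yaml
+apiVersion: operator.tigera.io/v1
+kind: Istio
+metadata:
+  name: default  # assumed resource name
+spec:
+  istioCNI:
+    spec:
+      template:
+        spec:
+          # Illustrative scheduling constraints for the Istio CNI pods.
+          nodeSelector:
+            kubernetes.io/os: linux
+          tolerations:
+            - operator: Exists
+```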
+
+*Appears in:*
+
+- [IstioCNIDaemonsetSpec](#istiocnidaemonsetspec)
+
+| Field | Description |
+| -------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
+| `spec` *[IstioCNIDaemonsetPodSpec](#istiocnidaemonsetpodspec)* | (Optional) Spec allows users to specify custom fields for the Istio CNI Daemonset. |
+
+### IstioSpec[​](#istiospec)
+
+IstioSpec defines the desired state of Istio
+
+*Appears in:*
+
+- [Istio](#istio)
+
+| Field | Description |
+| ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| `istiod` *[IstiodDeployment](#istioddeployment)* | (Optional) IstiodDeployment defines the resource requirements and node selector for the Istio deployment. |
+| `istioCNI` *[IstioCNIDaemonset](#istiocnidaemonset)* | (Optional) IstioCNIDaemonset defines the resource requirements for the Istio CNI plugin. |
+| `ztunnel` *[ZTunnelDaemonset](#ztunneldaemonset)* | (Optional) ZTunnelDaemonset defines the resource requirements for the ZTunnelDaemonset component. |
+| `dscpMark` *[DSCP](#dscp)* | (Optional) DSCPMark defines the value of the DSCP mark set by Felix and recognised by Istio CNI for Transparent NetworkPolicies.
| + +### IstioStatus[​](#istiostatus) + +IstioStatus defines the observed state of Istio + +*Appears in:* + +- [Istio](#istio) + +| Field | Description | +| ------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `conditions` *[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#condition-v1-meta) array* | Conditions represents the latest observed set of conditions for the component. A component may be one or more of Ready, Progressing, Degraded or other customer types. | + +### IstiodDeployment[​](#istioddeployment) + +IstiodDeployment defines customized settings for the Istio deployment. + +*Appears in:* + +- [IstioSpec](#istiospec) + +| Field | Description | +| ------------------------------------------------------ | -------------------------------------------------------------------------------- | +| `spec` *[IstiodDeploymentSpec](#istioddeploymentspec)* | (Optional) Spec allows users to specify custom fields for the Istiod Deployment. | + +### IstiodDeploymentPodSpec[​](#istioddeploymentpodspec) + +IstiodDeploymentPodSpec defines the pod spec for customizing the Istiod Deployment. + +*Appears in:* + +- [IstiodDeploymentSpecTemplate](#istioddeploymentspectemplate) + +| Field | Description | +| --------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | +| `affinity` *[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#affinity-v1-core)* | (Optional) Affinity specifies the affinity for the deployment. 
| +| `nodeSelector` *object (keys:string, values:string)* | (Optional) NodeSelector specifies the node affinity for the deployment. | +| `resources` *[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)* | (Optional) Resources specifies the compute resources required for the deployment. | +| `tolerations` *[Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#toleration-v1-core) array* | (Optional) Tolerations specifies the tolerations for the deployment. | + +### IstiodDeploymentSpec[​](#istioddeploymentspec) + +IstiodDeploymentSpec defines the spec for customizing the Istiod Deployment. + +*Appears in:* + +- [IstiodDeployment](#istioddeployment) + +| Field | Description | +| -------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ | +| `template` *[IstiodDeploymentSpecTemplate](#istioddeploymentspectemplate)* | (Optional) Template allows users to specify custom fields for the Istiod Deployment. | + +### IstiodDeploymentSpecTemplate[​](#istioddeploymentspectemplate) + +IstiodDeploymentSpecTemplate defines the template for customizing the Istiod Deployment. + +*Appears in:* + +- [IstiodDeploymentSpec](#istioddeploymentspec) + +| Field | Description | +| ------------------------------------------------------------ | -------------------------------------------------------------------------------- | +| `spec` *[IstiodDeploymentPodSpec](#istioddeploymentpodspec)* | (Optional) Spec allows users to specify custom fields for the Istiod Deployment. | + ### KubeProxyManagementType[​](#kubeproxymanagementtype) *Underlying type:* *string* @@ -41441,6 +41691,8 @@ Metadata contains the standard Kubernetes labels and annotations fields. 
- [CalicoNodeDaemonSetPodTemplateSpec](#caliconodedaemonsetpodtemplatespec) - [CalicoNodeWindowsDaemonSet](#caliconodewindowsdaemonset) - [CalicoNodeWindowsDaemonSetPodTemplateSpec](#caliconodewindowsdaemonsetpodtemplatespec) +- [CalicoWebhooksDeployment](#calicowebhooksdeployment) +- [CalicoWebhooksDeploymentPodTemplateSpec](#calicowebhooksdeploymentpodtemplatespec) - [CalicoWindowsUpgradeDaemonSet](#calicowindowsupgradedaemonset) - [CalicoWindowsUpgradeDaemonSetPodTemplateSpec](#calicowindowsupgradedaemonsetpodtemplatespec) - [GatewayCertgenJob](#gatewaycertgenjob) @@ -41589,7 +41841,7 @@ One of: Enabled, Disabled ProductVariant represents the variant of the product. -One of: Calico, CalicoEnterprise +One of: Calico, TigeraSecureEnterprise *Appears in:* @@ -42007,6 +42259,57 @@ WhiskerStatus defines the observed state of Whisker | `vxlanMACPrefix` *string* | (Optional) VXLANMACPrefix is the prefix used when generating MAC addresses for virtual NICs | | `vxlanAdapter` *string* | (Optional) VXLANAdapter is the Network Adapter used for VXLAN, leave blank for primary NIC | +### ZTunnelDaemonset[​](#ztunneldaemonset) + +ZTunnelDaemonset defines customized settings for the ZTunnelDaemonset component. + +*Appears in:* + +- [IstioSpec](#istiospec) + +| Field | Description | +| ------------------------------------------------------ | -------------------------------------------------------------------------------- | +| `spec` *[ZTunnelDaemonsetSpec](#ztunneldaemonsetspec)* | (Optional) Spec allows users to specify custom fields for the ZTunnel Daemonset. | + +### ZTunnelDaemonsetPodSpec[​](#ztunneldaemonsetpodspec) + +ZTunnelDaemonsetPodSpec defines the pod spec for customizing the ZTunnel Daemonset. 
+ +*Appears in:* + +- [ZTunnelDaemonsetSpecTemplate](#ztunneldaemonsetspectemplate) + +| Field | Description | +| --------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | +| `affinity` *[Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#affinity-v1-core)* | (Optional) Affinity specifies the affinity for the deployment. | +| `nodeSelector` *object (keys:string, values:string)* | (Optional) NodeSelector specifies the node affinity for the deployment. | +| `resources` *[ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#resourcerequirements-v1-core)* | (Optional) Resources specifies the compute resources required for the deployment. | +| `tolerations` *[Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#toleration-v1-core) array* | (Optional) Tolerations specifies the tolerations for the deployment. | + +### ZTunnelDaemonsetSpec[​](#ztunneldaemonsetspec) + +ZTunnelDaemonsetSpec defines the spec for customizing the ZTunnel Daemonset. + +*Appears in:* + +- [ZTunnelDaemonset](#ztunneldaemonset) + +| Field | Description | +| -------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ | +| `template` *[ZTunnelDaemonsetSpecTemplate](#ztunneldaemonsetspectemplate)* | (Optional) Template allows users to specify custom fields for the ZTunnel Daemonset. | + +### ZTunnelDaemonsetSpecTemplate[​](#ztunneldaemonsetspectemplate) + +ZTunnelDaemonsetSpecTemplate defines the template for customizing the ZTunnel Daemonset. 
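+Following the same pattern, the ztunnel pods' compute resources could be overridden through this template (the resource name and the request/limit values are illustrative assumptions):
+
+```yaml
+apiVersion: operator.tigera.io/v1
+kind: Istio
+metadata:
+  name: default  # assumed resource name
+spec:
+  ztunnel:
+    spec:
+      template:
+        spec:
+          # Illustrative compute resources for the ztunnel pods.
+          resources:
+            requests:
+              cpu: 100m
+              memory: 128Mi
+            limits:
+              memory: 512Mi
+```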
+ +*Appears in:* + +- [ZTunnelDaemonsetSpec](#ztunneldaemonsetspec) + +| Field | Description | +| ------------------------------------------------------------ | -------------------------------------------------------------------------------- | +| `spec` *[ZTunnelDaemonsetPodSpec](#ztunneldaemonsetpodspec)* | (Optional) Spec allows users to specify custom fields for the ZTunnel Daemonset. | + ### Helm installation reference You can customize the following resources and settings during Calico Helm-based installation using the file, `values.yaml`. @@ -55751,21 +56054,21 @@ The full list of parameters which can be set is as follows. **Tab: Configuration file** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `BPFRedirectToPeer` | -| Description | Controls whether traffic may be forwarded directly to the peer side of a workload’s device. Note that the legacy "L2Only" option is now deprecated and if set it is treated like "Enabled". Setting this option to "Enabled" allows direct redirection (including from L3 host devices such as IPIP tunnels or WireGuard), which can improve redirection performance but causes the redirected packets to bypass the host‑side ingress path. As a result, packet‑capture tools on the host side of the workload device (for example, tcpdump) will not see that traffic. 
|
-| Schema      | One of: `Disabled`, `Enabled`, `L2Only` (case insensitive) |
-| Default     | `Enabled` |
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `BPFRedirectToPeer` |
+| Description | Controls whether traffic may be forwarded directly to the peer side of a workload’s device. Note that the legacy "L2Only" option is now deprecated and if set it is treated like "Enabled". Setting this option to "Enabled" allows direct redirection (including from L3 host devices such as IPIP tunnels or WireGuard), which can improve redirection performance but causes the redirected packets to bypass the host‑side ingress path. As a result, packet‑capture tools on the host side of the workload device (for example, tcpdump) will not see that traffic. 
| +| Schema | One of: `Disabled`, `Enabled`, `L2Only` (case insensitive) | +| Default | `Enabled` | **Tab: Environment variable** -| Attribute | Value | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Key | `FELIX_BPFREDIRECTTOPEER` | -| Description | Controls whether traffic may be forwarded directly to the peer side of a workload’s device. Note that the legacy "L2Only" option is now deprecated and if set it is treated like "Enabled". Setting this option to "Enabled" allows direct redirection (including from L3 host devices such as IPIP tunnels or WireGuard), which can improve redirection performance but causes the redirected packets to bypass the host‑side ingress path. As a result, packet‑capture tools on the host side of the workload device (for example, tcpdump) will not see that traffic. 
|
-| Schema      | One of: `Disabled`, `Enabled`, `L2Only` (case insensitive) |
-| Default     | `Enabled` |
+| Attribute   | Value |
+| ----------- | ----- |
+| Key         | `FELIX_BPFREDIRECTTOPEER` |
+| Description | Controls whether traffic may be forwarded directly to the peer side of a workload’s device. Note that the legacy "L2Only" option is now deprecated and if set it is treated like "Enabled". Setting this option to "Enabled" allows direct redirection (including from L3 host devices such as IPIP tunnels or WireGuard), which can improve redirection performance but causes the redirected packets to bypass the host‑side ingress path. As a result, packet‑capture tools on the host side of the workload device (for example, tcpdump) will not see that traffic. |
+| Schema      | One of: `Disabled`, `Enabled`, `L2Only` (case insensitive) |
+| Default     | `Enabled` |

diff --git a/static/calico/llms.txt b/static/calico/llms.txt
index 1c429be662..bc66374af0 100644
--- a/static/calico/llms.txt
+++ b/static/calico/llms.txt
@@ -18,65 +18,65 @@

## Installing and upgrading

-- [Install Calico](https://docs.tigera.io/calico/latest/getting-started/): Install Calico on nodes and hosts for popular orchestrators, and install the calicoctl command line interface (CLI) tool.
-- [Calico quickstart guide](https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart): Quickstart for Calico. 
-- [System requirements for Kubernetes](https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements): Review requirements before installing Calico to ensure success. -- [Community-tested Kubernetes versions](https://docs.tigera.io/calico/latest/getting-started/kubernetes/community-tested): Provides community inputs on what versions of Kubernetes and platforms work with Calico. -- [Amazon Elastic Kubernetes Service (EKS)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks): Enable Calico network policy in EKS. -- [Install Calico network policy on a Google Kubernetes Engine cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/gke): Enable Calico network policy in GKE. -- [IBM Cloud Kubernetes Service (IKS)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/iks): Use IKS with built-in support for Calico networking and network policy. -- [Microsoft Azure Kubernetes Service (AKS)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/aks): Enable Calico network policy in AKS. -- [Migrate from Azure-managed Calico to self-managed Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/aks-migrate): Switch AKS clusters between Azure-managed and self-managed Calico -- [Self-managed Kubernetes in Amazon Web Services (AWS)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/aws): Use Calico with a self-managed Kubernetes cluster in Amazon Web Services (AWS). -- [Self-managed Kubernetes in Google Compute Engine (GCE)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/gce): Use Calico with a self-managed Kubernetes cluster in Google Compute Engine (GCE). 
-- [Self-managed Kubernetes in Microsoft Azure](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/azure): Use Calico with a self-managed Kubernetes cluster in Microsoft Azure. -- [Self-managed Kubernetes in DigitalOcean (DO)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/do): Use Calico with a self-managed Kubernetes cluster in DigitalOcean (DO). -- [Install Calico networking and network policy for on-premises deployments](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises): Install Calico networking and network policy for on-premises deployments. -- [Customize Calico configuration](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/config-options): Optionally customize Calico prior to installation. -- [Non-cluster hosts](https://docs.tigera.io/calico/latest/getting-started/bare-metal/): Install Calico on hosts to secure host communications. -- [About non-cluster hosts](https://docs.tigera.io/calico/latest/getting-started/bare-metal/about): Install Calico on hosts not in a cluster with network policy, or networking and network policy. -- [System requirements for Kubernetes](https://docs.tigera.io/calico/latest/getting-started/bare-metal/requirements): Review node requirements for installing Calico. -- [Install on non-cluster hosts](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/): Install Calico on hosts to secure host communications. -- [Docker container install](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/container): Install Calico on non-cluster hosts using a Docker container. -- [Binary install with package manager](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary-mgr): Install Calico on non-cluster host using a package manager. 
-- [Binary install without package manager](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary): Install Calico binary on non-cluster hosts without a package manager. -- [OpenShift](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/): Install Calico on OpenShift for networking and network policy. -- [System requirements for OpenShift](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/requirements): Review the requirements for using OpenShift with Calico. -- [Install an OpenShift 4 cluster with Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/installation): Install Calico on an OpenShift 4 cluster. -- [Install Calico on an OpenShift HCP cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/hostedcontrolplanes): Install Calico on an OpenShift Hosted Control Planes (HCP) cluster. -- [Migrate from OVN-Kubernetes CNI to Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/ovn-to-calico): Migrate from OVN Kubernetes CNI to Calico -- [Rancher Kubernetes Engine (RKE)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/rancher): Install Calico on a Rancher Kubernetes Engine cluster. -- [Flannel](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/): Use Calico network policy on top of flannel networking. -- [Install Calico for policy and flannel (aka Canal) for networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel): If you use flannel for networking, you can install Calico network policy to secure cluster communications. 
-- [Migrate a Kubernetes cluster from flannel/Canal to Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/migration-from-flannel): Preserve your existing VXLAN networking in Calico, but take full advantage of Calico IP address management (IPAM) and advanced network policy features. -- [Calico for Windows](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/): Install and configure Calico for Windows. -- [Limitations and known issues](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/limitations): Review limitations before starting installation. -- [Requirements](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/requirements): Review the requirements for Calico for Windows. -- [Install using Operator](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/operator): Install Calico for Windows on a Kubernetes cluster for testing or development. -- [Calico for Windows on a Rancher Kubernetes Engine cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/rancher): Install Calico for Windows on a Rancher RKE cluster. -- [Basic policy demo](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/demo): An interactive demo to show how to apply basic network policy to pods in a Calico for Windows cluster. -- [Troubleshoot Calico for Windows](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/troubleshoot): Help for troubleshooting Calico for Windows issues in Calico this release. -- [K3s](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/): Get Calico up and running in your K3s cluster. -- [Quickstart for Calico on K3s](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart): Install Calico on a single-node K3s cluster for testing or development in under 5 minutes. 
-- [K3s multi-node install](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install): Install Calico on a multi node K3s cluster for testing or development. -- [Install using Helm](https://docs.tigera.io/calico/latest/getting-started/kubernetes/helm): Install Calico on a Kubernetes cluster using Helm 3. -- [Install Calico on a single-host Kubernetes cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k8s-single-node): Install Calico on a single-host Kubernetes cluster for testing or development in under 15 minutes. -- [Quickstart for Calico on MicroK8s](https://docs.tigera.io/calico/latest/getting-started/kubernetes/microk8s): Install Calico on a single-host MicroK8s cluster for testing or development in under 5 minutes. -- [Quickstart for Calico on minikube](https://docs.tigera.io/calico/latest/getting-started/kubernetes/minikube): Enable Calico on a single/multi-node minikube cluster for testing or development in under 1 minute. -- [Kind multi-node install](https://docs.tigera.io/calico/latest/getting-started/kubernetes/kind): Enable Calico on a single/multi-node Kind cluster for testing or development in approximately 10 minutes. -- [Calico the hard way](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/): Up for the challenge? Calico the hard way takes you under the covers of an end-to-end Calico installation. -- [Calico the hard way](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/overview): A tutorial for installing Calico the hard way. -- [Stand up Kubernetes](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/standing-up-kubernetes): Get a Kubernetes cluster up and running. -- [The Calico datastore](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/the-calico-datastore): The central datastore for your clusters' operational and configuration state. 
-- [Configure IP pools](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-ip-pools): Quick review of defining IP pools (IP address ranges) in clusters. -- [Install CNI plugin](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-cni-plugin): Steps to install the Calico Container Network Interface (CNI) -- [Install Typha](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-typha): Learn about Typha for scaling deployment. -- [Install calico/node](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-node): Configure and install calico/node as a daemon set. -- [Configure BGP peering](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-bgp-peering): Quick review of BGP peering options. -- [Test networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-networking): Test that networking works correctly. -- [Test network policy](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-network-policy): Verify that network policy works correctly. -- [End user RBAC](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/end-user-rbac): Quick review of common roles and access controls for running clusters in production. -- [Istio integration](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/istio-integration): Enforce Calico network policy for Istio service mesh applications. +- [Install Calico](https://docs.tigera.io/calico/latest/getting-started/): Install Calico Open Source on Kubernetes, OpenShift, OpenStack, or bare-metal hosts. Includes guidance on installing the calicoctl command-line tool. 
+- [Calico quickstart guide](https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart): Install Calico Open Source on a single-host Kubernetes cluster in roughly 15 minutes — the standard starter path for trying Calico networking and network policy on a development machine. +- [System requirements for Kubernetes](https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements): Cluster, kernel, and platform requirements you must meet before installing Calico Open Source on Kubernetes. +- [Community-tested Kubernetes versions](https://docs.tigera.io/calico/latest/getting-started/kubernetes/community-tested): Community-reported compatibility data for Calico Open Source across Kubernetes versions, distributions, and host platforms. +- [Amazon Elastic Kubernetes Service (EKS)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks): Add Calico Open Source network policy to an Amazon EKS cluster running the AWS VPC CNI, without replacing the cluster's networking data plane. +- [Install Calico network policy on a Google Kubernetes Engine cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/gke): Add Calico Open Source network policy to a Google Kubernetes Engine (GKE) cluster on top of the built-in GKE networking. +- [IBM Cloud Kubernetes Service (IKS)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/iks): IBM Cloud Kubernetes Service (IKS) ships with Calico Open Source as the built-in networking and policy engine — what is included and how to use it. +- [Microsoft Azure Kubernetes Service (AKS)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/aks): Add Calico Open Source network policy to an Azure Kubernetes Service (AKS) cluster running the Azure CNI. 
+- [Migrate from Azure-managed Calico to self-managed Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/aks-migrate): Switch an AKS cluster between the Azure-managed Calico add-on and a self-managed Calico Open Source installation. +- [Self-managed Kubernetes in Amazon Web Services (AWS)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/aws): Run Calico Open Source on a self-managed Kubernetes cluster in Amazon Web Services (AWS) — what to know about VPC sizing, MTU, and source/dest checks. +- [Self-managed Kubernetes in Google Compute Engine (GCE)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/gce): Run Calico Open Source on a self-managed Kubernetes cluster in Google Compute Engine (GCE) — what to know about IP forwarding, MTU, and route limits. +- [Self-managed Kubernetes in Microsoft Azure](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/azure): Run Calico Open Source on a self-managed Kubernetes cluster in Microsoft Azure — what to know about VNet routing, UDR limits, and IPAM choices. +- [Self-managed Kubernetes in DigitalOcean (DO)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-public-cloud/do): Run Calico Open Source on a self-managed Kubernetes cluster in DigitalOcean — what to know about MTU, droplet networking, and floating IPs. +- [Install Calico networking and network policy for on-premises deployments](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises): Install Calico Open Source networking and network policy on a self-managed Kubernetes cluster running on-premises hardware. 
+- [Customize Calico configuration](https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/config-options): Customize a Calico Open Source on-premises installation before applying it — IP pools, BGP, MTU, and other Installation resource fields. +- [Non-cluster hosts](https://docs.tigera.io/calico/latest/getting-started/bare-metal/): Install Calico Open Source on bare-metal hosts outside a Kubernetes cluster — choose between policy-only, networking-only, or full installation paths. +- [About non-cluster hosts](https://docs.tigera.io/calico/latest/getting-started/bare-metal/about): Install Calico Open Source on non-cluster hosts and VMs — pick between policy-only and networking-and-policy modes for protecting hosts outside Kubernetes. +- [System requirements for Kubernetes](https://docs.tigera.io/calico/latest/getting-started/bare-metal/requirements): Operating system, kernel, and connectivity requirements for installing Calico Open Source on a non-cluster host. +- [Install on non-cluster hosts](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/): Choose an installation method for Calico Open Source on a bare-metal host — package manager, raw binary, or Docker container. +- [Docker container install](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/container): Run the Calico Open Source agent on a non-cluster host inside a Docker container. +- [Binary install with package manager](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary-mgr): Install the Calico Open Source binary on a non-cluster host using a Linux package manager such as apt or yum. +- [Binary install without package manager](https://docs.tigera.io/calico/latest/getting-started/bare-metal/installation/binary): Install the Calico Open Source binary directly on a non-cluster host without using a package manager. 
+- [OpenShift](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/): Install Calico Open Source on OpenShift 4 for cluster networking and network policy, replacing the default OVN-Kubernetes data plane. +- [System requirements for OpenShift](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/requirements): Cluster, OpenShift, and host OS requirements you must meet before installing Calico Open Source on an OpenShift 4 cluster. +- [Install an OpenShift 4 cluster with Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/installation): Install Calico Open Source on a self-managed OpenShift 4 cluster using the operator-based installation flow. +- [Install Calico on an OpenShift HCP cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/hostedcontrolplanes): Install Calico Open Source on an OpenShift Hosted Control Planes (HCP) cluster, where the control plane is managed and the data plane runs on user-owned nodes. +- [Migrate from OVN-Kubernetes CNI to Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/openshift/ovn-to-calico): Migrate an OpenShift 4 cluster from the OVN-Kubernetes CNI to Calico Open Source as the cluster networking provider. +- [Rancher Kubernetes Engine (RKE)](https://docs.tigera.io/calico/latest/getting-started/kubernetes/rancher): Install Calico Open Source on a Rancher Kubernetes Engine cluster. +- [Flannel](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/): Run Calico Open Source policy enforcement on a cluster that uses Flannel for the networking data plane. +- [Install Calico for policy and flannel (aka Canal) for networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel): Install Calico Open Source network policy on an existing Flannel-networked cluster without replacing the data plane. 
+- [Migrate a Kubernetes cluster from flannel/Canal to Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/migration-from-flannel): Migrate from Flannel to Calico Open Source while preserving the existing VXLAN data plane, gaining Calico IPAM and advanced policy. +- [Calico for Windows](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/): Install and configure Calico Open Source for Windows — covers requirements, supported platforms, and the install paths for Windows nodes. +- [Limitations and known issues](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/limitations): Known limitations of Calico Open Source for Windows that you should review before planning an installation. +- [Requirements](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/requirements): Cluster and Windows host requirements you must meet before installing Calico Open Source for Windows. +- [Install using Operator](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/operator): Install Calico Open Source for Windows on a Kubernetes cluster using the operator, for testing or development. +- [Calico for Windows on a Rancher Kubernetes Engine cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/rancher): Install Calico Open Source for Windows on a Rancher RKE cluster with Windows worker nodes. +- [Basic policy demo](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/demo): Interactive demo that applies basic Calico Open Source network policy to pods running on a Windows node. +- [Troubleshoot Calico for Windows](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/troubleshoot): Troubleshooting guide for Calico Open Source for Windows clusters — common issues, diagnostic steps, and where to look for logs. 
+- [K3s](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/): Install Calico Open Source on a K3s cluster — covers single-node and multi-node configurations. +- [Quickstart for Calico on K3s](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart): Quickstart that installs Calico Open Source on a single-node K3s cluster in roughly 5 minutes for testing or development. +- [K3s multi-node install](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install): Install Calico Open Source on a multi-node K3s cluster for testing or development workloads. +- [Install using Helm](https://docs.tigera.io/calico/latest/getting-started/kubernetes/helm): Install Calico Open Source on a Kubernetes cluster using a Helm 3 chart. +- [Install Calico on a single-host Kubernetes cluster](https://docs.tigera.io/calico/latest/getting-started/kubernetes/k8s-single-node): Install Calico Open Source on a single-host Kubernetes cluster for testing or development in roughly 15 minutes. +- [Quickstart for Calico on MicroK8s](https://docs.tigera.io/calico/latest/getting-started/kubernetes/microk8s): Install Calico Open Source on a single-host MicroK8s cluster for testing or development in roughly 5 minutes. +- [Quickstart for Calico on minikube](https://docs.tigera.io/calico/latest/getting-started/kubernetes/minikube): Install Calico Open Source on a single- or multi-node minikube cluster for testing or development in roughly 1 minute. +- [Kind multi-node install](https://docs.tigera.io/calico/latest/getting-started/kubernetes/kind): Install Calico Open Source on a single- or multi-node Kind cluster for testing or development in roughly 10 minutes. +- [Calico the hard way](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/): Calico the hard way — install every Calico Open Source component manually to understand how the pieces fit together end to end. 
+- [Calico the hard way](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/overview): Step-by-step tutorial intro for Calico the hard way — the cluster you build with Calico Open Source, the components installed by hand, and prerequisites. +- [Stand up Kubernetes](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/standing-up-kubernetes): Calico the hard way — stand up a minimal Kubernetes cluster ready to receive a manual Calico Open Source installation. +- [The Calico datastore](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/the-calico-datastore): Calico the hard way — choose between the Kubernetes API datastore and etcd for the Calico Open Source operational and configuration store. +- [Configure IP pools](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-ip-pools): Calico the hard way — define IP pools that govern which address ranges Calico Open Source assigns to pods and services. +- [Install CNI plugin](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-cni-plugin): Calico the hard way — install the Calico Open Source CNI plugin on each node and wire it into kubelet. +- [Install Typha](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-typha): Calico the hard way — install Typha to fan out datastore reads so Calico Open Source can scale to large clusters. +- [Install calico/node](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-node): Calico the hard way — deploy calico/node as a DaemonSet so the Calico Open Source agent runs on every cluster node. +- [Configure BGP peering](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/configure-bgp-peering): Calico the hard way — configure BGP peering between Calico Open Source nodes and review the available peering topologies. 
+- [Test networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-networking): Calico the hard way — verify pod-to-pod connectivity and routing on a cluster after the manual Calico Open Source build-out. +- [Test network policy](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/test-network-policy): Calico the hard way — verify that Calico Open Source network policy enforcement is working after the manual install. +- [End user RBAC](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/end-user-rbac): Calico the hard way — RBAC roles and access controls that govern who can edit Calico Open Source resources in a production cluster. +- [Istio integration](https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/istio-integration): Calico the hard way — extend Calico Open Source policy enforcement into Istio service-mesh sidecars for layer-7 traffic. - [Upgrade](https://docs.tigera.io/calico/latest/operations/upgrading/): Upgrade to a newer version of Calico. - [Upgrade Calico on Kubernetes](https://docs.tigera.io/calico/latest/operations/upgrading/kubernetes-upgrade): Upgrade to a newer version of Calico for Kubernetes. - [Upgrade Calico on OpenShift 4](https://docs.tigera.io/calico/latest/operations/upgrading/openshift-upgrade): Upgrade to a newer version of Calico for OpenShift. @@ -86,21 +86,21 @@ - [Enabling the eBPF data plane](https://docs.tigera.io/calico/latest/operations/ebpf/enabling-ebpf): Step-by-step instructions for enabling the eBPF data plane. - [Install in eBPF mode](https://docs.tigera.io/calico/latest/operations/ebpf/install): Install Calico in eBPF mode. - [Troubleshoot eBPF mode](https://docs.tigera.io/calico/latest/operations/ebpf/troubleshoot-ebpf): How to troubleshoot when running in eBPF mode. -- [Calico nftables data plane](https://docs.tigera.io/calico/latest/getting-started/kubernetes/nftables): Install Calico using the nftables data plane. 
-- [VPP data plane](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/): Install the VPP userspace data plane to unlock extra performance for your cluster! -- [Get started with VPP networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/getting-started): Install Calico with the VPP data plane on a Kubernetes cluster. -- [IPsec configuration with VPP](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/ipsec): Enable IPsec for faster encryption between nodes when using the VPP data plane. -- [Details of VPP implementation & known-issues](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/specifics): Behavioral discrepancies when running with the Calico/VPP data plane -- [Install an OpenShift 4 cluster with Calico VPP](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/openshift): Install Calico VPP on an OpenShift 4 cluster. -- [OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/): Install Calico networking and network policy for OpenStack. -- [Calico for OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/overview): Review the Calico components used in an OpenStack deployment. -- [System requirements for OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/requirements): Requirements for installing Calico on OpenStack nodes. -- [Install Calico on OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/): Install Calico on OpenStack -- [Calico on OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/overview): Choose a method for installing Calico for OpenStack. -- [Ubuntu](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/ubuntu): Install Calico on OpenStack, Ubuntu nodes. 
-- [Red Hat Enterprise Linux](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/redhat): Install Calico on OpenStack, Red Hat Enterprise Linux nodes. -- [DevStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/devstack): Quickstart to show connectivity between DevStack and Calico. -- [Verify your deployment](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/verification): Quick steps to test that your Calico-based OpenStack deployment is running correctly. +- [Calico nftables data plane](https://docs.tigera.io/calico/latest/getting-started/kubernetes/nftables): Install Calico Open Source with the nftables data plane instead of the default iptables back end. +- [VPP data plane](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/): Install Calico Open Source with the VPP data plane — high-throughput userspace networking for clusters that need more than iptables or eBPF. +- [Get started with VPP networking](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/getting-started): Install Calico Open Source with the VPP userspace data plane on a Kubernetes cluster. +- [IPsec configuration with VPP](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/ipsec): Configure IPsec encryption between nodes for Calico Open Source clusters running the VPP data plane. +- [Details of VPP implementation & known-issues](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/specifics): Behavioral differences to expect when running Calico Open Source with the VPP data plane instead of iptables or eBPF. +- [Install an OpenShift 4 cluster with Calico VPP](https://docs.tigera.io/calico/latest/getting-started/kubernetes/vpp/openshift): Install Calico Open Source with the VPP data plane on an OpenShift 4 cluster. 
+- [OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/): Install Calico Open Source networking and network policy for OpenStack — covers Neutron integration, supported distributions, and the OpenStack-specific install path. +- [Calico for OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/overview): Components and topology used when running Calico Open Source as the networking and policy layer for an OpenStack deployment. +- [System requirements for OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/requirements): Hypervisor, OS, and OpenStack requirements you must meet before installing Calico Open Source on OpenStack nodes. +- [Install Calico on OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/): Install Calico Open Source on an OpenStack deployment — choose a supported Linux distribution and follow the per-distribution path. +- [Calico on OpenStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/overview): Pick an installation method for Calico Open Source on OpenStack — DevStack for evaluation, or a per-distribution path for production. +- [Ubuntu](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/ubuntu): Install Calico Open Source on an OpenStack deployment running Ubuntu compute nodes. +- [Red Hat Enterprise Linux](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/redhat): Install Calico Open Source on an OpenStack deployment running Red Hat Enterprise Linux compute nodes. +- [DevStack](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/devstack): Quickstart that wires Calico Open Source into a DevStack OpenStack environment to verify connectivity and policy. 
+- [Verify your deployment](https://docs.tigera.io/calico/latest/getting-started/openstack/installation/verification): Verification steps that confirm a Calico Open Source OpenStack deployment is forwarding traffic and applying policy correctly. ## Networking @@ -157,53 +157,53 @@ ## Network policy -- [Network policy](https://docs.tigera.io/calico/latest/network-policy/): Calico Network Policy and Calico Global Network Policy are the fundamental resources to secure workloads and hosts, and to adopt a zero trust security model. -- [Adopt a zero trust network model for security](https://docs.tigera.io/calico/latest/network-policy/adopt-zero-trust): Best practices to adopt a zero trust network model to secure workloads and hosts. Learn 5 key requirements to control network access for cloud-native strategy. -- [Get started with policy](https://docs.tigera.io/calico/latest/network-policy/get-started/): If you are new to Kubernetes, start with "Kubernetes policy" and learn the basics of enforcing policy for pod traffic. Otherwise, dive in and create more powerful policies with Calico policy. The good news is, Kubernetes and Calico policies are very similar and work alongside each other -- so managing both types is easy. -- [Calico policy](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/): Calico network policy lets you secure both workloads and hosts. -- [Get started with Calico network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy): Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy. -- [Calico automatic labels](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-labels): Calico automatic labels for use with resources. 
-- [Get started with Calico network policy for OpenStack](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/network-policy-openstack): Extend OpenStack security groups by applying Calico network policy and using labels to identify VMs within network policy rules. -- [Calico policy tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-policy-tutorial): Learn how to create more advanced Calico network policies (namespace, allow and deny all ingress and egress). -- [Kubernetes policy](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/): Manage your Kubernetes network policies right alongside the more powerful Calico network policies. -- [Get started with Kubernetes network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy): Learn Kubernetes policy syntax, rules, and features for controlling network traffic. -- [Kubernetes policy, demo](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-demo): An interactive demo that visually shows how applying Kubernetes policy allows and denies connections. -- [Kubernetes policy, basic tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-basic): Learn how to use basic Kubernetes network policy to securely restrict traffic to/from pods. -- [Kubernetes policy, advanced tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-advanced): Learn how to create more advanced Kubernetes network policies (namespace, allow and deny all ingress and egress). 
-- [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-default-deny): Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined. -- [Policy tiers](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/): Learn how policy tiers allow diverse teams to securely manage Kubernetes policy. -- [Get started with policy tiers](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/tiered-policy): Understand how tiered policy works and supports microsegmentation. -- [Configure RBAC for tiered policies](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/rbac-tiered-policies): Configure RBAC to control access to policies and tiers. -- [Stage, preview impacts, and enforce policy](https://docs.tigera.io/calico/latest/network-policy/staged-network-policies): Stage and preview policies to observe traffic implications before enforcing them. -- [Policy rules](https://docs.tigera.io/calico/latest/network-policy/policy-rules/): Control traffic to/from endpoints using Calico network policy rules. -- [Basic rules](https://docs.tigera.io/calico/latest/network-policy/policy-rules/policy-rules-overview): Define network connectivity for Calico endpoints using policy rules and label selectors. -- [Use namespace rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/namespace-policy): Use namespaces and namespace selectors in Calico network policy to group or separate resources. Use network policies to allow or deny traffic to/from pods that belong to specific namespaces. -- [Use service rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-policy): Use Kubernetes Service names in policy rules. 
-- [Use service accounts rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-accounts): Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams. -- [Use external IPs or networks rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/external-ips-policy): Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets. -- [Use ICMP/ping rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping): Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints. -- [Use log rules to test network policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/log-rules): Debug your policies with Log rules. -- [Policy for hosts and VMs](https://docs.tigera.io/calico/latest/network-policy/hosts/): Use the same Calico network policy for workloads to restrict traffic between hosts and the outside world. -- [Protect hosts and VMs](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts): Calico network policy not only protects workloads, but also hosts. Create a Calico network policies to restrict traffic to/from hosts. -- [Protect Kubernetes nodes](https://docs.tigera.io/calico/latest/network-policy/hosts/kubernetes-nodes): Protect Kubernetes nodes with host endpoints managed by Calico. -- [Protect hosts tutorial](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts-tutorial): Learn how to secure incoming traffic from outside the cluster using Calico host endpoints with network policy, including allowing controlled access to specific Kubernetes services. 
-- [Apply policy to forwarded traffic](https://docs.tigera.io/calico/latest/network-policy/hosts/host-forwarded-traffic): Apply Calico network policy to traffic being forward by hosts acting as routers or NAT gateways. -- [Policy for Kubernetes services](https://docs.tigera.io/calico/latest/network-policy/services/): Apply Calico policy to Kubernetes node ports, and to services that are exposed externally as cluster IPs. -- [Apply Calico policy to Kubernetes node ports](https://docs.tigera.io/calico/latest/network-policy/services/kubernetes-node-ports): Restrict access to Kubernetes node ports using Calico global network policy. Follow the steps to secure the host, the node ports, and the cluster. -- [Apply Calico policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico/latest/network-policy/services/services-cluster-ips): Expose Kubernetes service cluster IPs over BGP using Calico, and restrict who can access them using Calico network policy. -- [Policy for Istio](https://docs.tigera.io/calico/latest/network-policy/istio/): Configure the Calico "application layer policy" with application layer-specific attributes for Istio service mesh. -- [Enforce Calico network policy for Istio service mesh](https://docs.tigera.io/calico/latest/network-policy/istio/app-layer-policy): Enforce network policy for Istio service mesh including matching on HTTP methods and paths. -- [Use HTTP methods and paths in policy rules](https://docs.tigera.io/calico/latest/network-policy/istio/http-methods): Create a Calico network policy for Istio-enabled apps to restrict ingress traffic matching HTTP methods or paths. -- [Enforce Calico network policy using Istio (tutorial)](https://docs.tigera.io/calico/latest/network-policy/istio/enforce-policy-istio): Learn how Calico integrates with Istio to provide fine-grained access control using Calico network policies enforced within the service mesh and network layer. 
-- [Policy for extreme traffic](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/): Use Calico network policy early in the Linux packet processing pipeline to handle extreme traffic scenarios. -- [Enable extreme high-connection workloads](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/high-connection-workloads): Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience extremely large number of connections. -- [Defend against DoS attacks](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/defend-dos-attack): Define DoS mitigation rules in Calico policy to quickly drop connections when under attack. Learn how rules use eBPF and XDP, including hardware offload when available. -- [Encrypt in-cluster pod traffic](https://docs.tigera.io/calico/latest/network-policy/encrypt-cluster-pod-traffic): Enable WireGuard for state-of-the-art cryptographic security between pods for Calico clusters. -- [Secure Calico component communications](https://docs.tigera.io/calico/latest/network-policy/comms/): Secure communications for Calico components. -- [Configure encryption and authentication to secure Calico components](https://docs.tigera.io/calico/latest/network-policy/comms/crypto-auth): Enable TLS authentication and encryption for various Calico components. -- [Schedule Typha for scaling to well-known nodes](https://docs.tigera.io/calico/latest/network-policy/comms/reduce-nodes): Configure the Calico Typha TCP port. -- [Secure Calico Prometheus endpoints](https://docs.tigera.io/calico/latest/network-policy/comms/secure-metrics): Limit access to Calico metric endpoints using network policy. -- [Secure BGP sessions](https://docs.tigera.io/calico/latest/network-policy/comms/secure-bgp): Configure BGP passwords to prevent attackers from injecting false routing information. 
+- [Network policy](https://docs.tigera.io/calico/latest/network-policy/): Secure Kubernetes workloads and hosts with Calico Open Source network policy — Calico NetworkPolicy and GlobalNetworkPolicy resources for adopting a zero-trust model. +- [Adopt a zero trust network model for security](https://docs.tigera.io/calico/latest/network-policy/adopt-zero-trust): Adopt a zero-trust network model for Kubernetes workloads and hosts using Calico Open Source — five requirements for controlling network access in cloud-native environments. +- [Get started with policy](https://docs.tigera.io/calico/latest/network-policy/get-started/): Pick a path for getting started with Calico Open Source policy — Kubernetes-native NetworkPolicy basics, or the richer Calico-specific resources that work alongside it. +- [Calico policy](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/): Use Calico Open Source policy resources to secure both workloads and hosts beyond what Kubernetes NetworkPolicy alone supports. +- [Get started with Calico network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy): Write your first Calico Open Source NetworkPolicy — sample policies that exercise the rich rule features that extend Kubernetes NetworkPolicy. +- [Calico automatic labels](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-labels): Reference list of automatic labels Calico Open Source attaches to resources, useful as selectors in policy rules. +- [Get started with Calico network policy for OpenStack](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/network-policy-openstack): Extend OpenStack security groups with Calico Open Source network policy and label-based rules for VMs running on OpenStack. 
+- [Calico policy tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-policy-tutorial): Step-by-step tutorial for advanced Calico Open Source policy patterns — namespace scoping, allow-all, deny-all, and ingress and egress controls. +- [Kubernetes policy](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/): Manage Kubernetes NetworkPolicy resources alongside the more powerful Calico Open Source policy resources in the same cluster. +- [Get started with Kubernetes network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy): Reference for Kubernetes NetworkPolicy syntax, rules, and features when used with the Calico Open Source enforcement engine. +- [Kubernetes policy, demo](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-demo): Interactive demo for a Calico Open Source cluster that visualizes how Kubernetes NetworkPolicy allows and denies connections between pods. +- [Kubernetes policy, basic tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-basic): Apply your first Kubernetes NetworkPolicy in a Calico Open Source cluster to restrict ingress and egress traffic to and from pods. +- [Kubernetes policy, advanced tutorial](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-policy-advanced): Write more advanced Kubernetes NetworkPolicy resources in a Calico Open Source cluster — namespace scoping, allow-all, and deny-all variants. +- [Enable a default deny policy for Kubernetes pods](https://docs.tigera.io/calico/latest/network-policy/get-started/kubernetes-default-deny): Apply a default-deny network policy in a Calico Open Source cluster so unprotected pods are denied traffic until explicit policy is written. 
+- [Policy tiers](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/): Use Calico Open Source policy tiers so platform, security, and app teams can author and order policy independently within a single Kubernetes cluster.
+- [Get started with policy tiers](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/tiered-policy): How tiered policy works in Calico Open Source — evaluation order, pass actions, and using tiers to support microsegmentation.
+- [Configure RBAC for tiered policies](https://docs.tigera.io/calico/latest/network-policy/policy-tiers/rbac-tiered-policies): Set up Kubernetes RBAC to control which users can edit Calico Open Source policies and tiers in a multi-team cluster.
+- [Stage, preview impacts, and enforce policy](https://docs.tigera.io/calico/latest/network-policy/staged-network-policies): Stage and preview Calico Open Source network policies to observe traffic implications before enforcing them in production.
+- [Policy rules](https://docs.tigera.io/calico/latest/network-policy/policy-rules/): Control traffic to and from endpoints using Calico Open Source network policy rules — selectors, actions, and ingress and egress directions.
+- [Basic rules](https://docs.tigera.io/calico/latest/network-policy/policy-rules/policy-rules-overview): How to write policy rules in Calico Open Source — label selectors, source and destination match criteria, and rule actions.
+- [Use namespace rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/namespace-policy): Group or separate workloads in Calico Open Source policy using namespaces and namespace selectors so policies apply only to specified namespaces.
+- [Use service rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-policy): Match on Kubernetes Service names in Calico Open Source policy rules instead of specific pod selectors.
+- [Use service accounts rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/service-accounts): Match on Kubernetes service accounts in Calico Open Source policy rules to validate workload identity and apply RBAC-controlled rules. +- [Use external IPs or networks rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/external-ips-policy): Restrict egress and ingress to specific IP ranges in Calico Open Source policy, either inline or via reusable network sets. +- [Use ICMP/ping rules in policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping): Allow or deny ICMP and ping traffic for Calico Open Source workloads and host endpoints using policy rules. +- [Use log rules to test network policy](https://docs.tigera.io/calico/latest/network-policy/policy-rules/log-rules): Add Log actions to Calico Open Source policy rules to debug which rules are matching traffic at runtime. +- [Policy for hosts and VMs](https://docs.tigera.io/calico/latest/network-policy/hosts/): Apply Calico Open Source network policy to host interfaces — extending the same selector-based policy model from pods to bare-metal hosts and VMs. +- [Protect hosts and VMs](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts): Protect Kubernetes hosts and bare-metal nodes with Calico Open Source policy by writing rules that target host endpoints. +- [Protect Kubernetes nodes](https://docs.tigera.io/calico/latest/network-policy/hosts/kubernetes-nodes): Protect Kubernetes node interfaces with Calico Open Source host endpoints to extend network policy to the node itself. +- [Protect hosts tutorial](https://docs.tigera.io/calico/latest/network-policy/hosts/protect-hosts-tutorial): Tutorial for protecting hosts in a Calico Open Source cluster — register host endpoints, write rules, and allow controlled access to specific Kubernetes services. 
+- [Apply policy to forwarded traffic](https://docs.tigera.io/calico/latest/network-policy/hosts/host-forwarded-traffic): Apply Calico Open Source network policy to traffic forwarded through hosts acting as routers or NAT gateways.
+- [Policy for Kubernetes services](https://docs.tigera.io/calico/latest/network-policy/services/): Apply Calico Open Source policy to Kubernetes Services — node ports, ClusterIPs, and externally exposed services.
+- [Apply Calico policy to Kubernetes node ports](https://docs.tigera.io/calico/latest/network-policy/services/kubernetes-node-ports): Restrict access to Kubernetes NodePort services using Calico Open Source GlobalNetworkPolicy applied to host endpoints.
+- [Apply Calico policy to services exposed externally as cluster IPs](https://docs.tigera.io/calico/latest/network-policy/services/services-cluster-ips): Expose Kubernetes Service ClusterIPs over BGP using Calico Open Source and restrict who can reach them with network policy.
+- [Policy for Istio](https://docs.tigera.io/calico/latest/network-policy/istio/): Configure Calico Open Source application-layer policy to apply Istio service mesh attributes (HTTP methods, paths) to Kubernetes traffic.
+- [Enforce Calico network policy for Istio service mesh](https://docs.tigera.io/calico/latest/network-policy/istio/app-layer-policy): Apply Calico Open Source network policy to Istio service-mesh traffic, including matching on HTTP methods and paths.
+- [Use HTTP methods and paths in policy rules](https://docs.tigera.io/calico/latest/network-policy/istio/http-methods): Restrict ingress traffic to Istio-enabled apps by matching HTTP methods or paths in a Calico Open Source network policy.
+- [Enforce Calico network policy using Istio (tutorial)](https://docs.tigera.io/calico/latest/network-policy/istio/enforce-policy-istio): Use Calico Open Source with Istio to apply fine-grained access control at both the network layer and inside the service mesh.
+- [Policy for extreme traffic](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/): Apply Calico Open Source network policy early in the Linux packet-processing pipeline to handle DoS, high-connection, and other extreme traffic scenarios.
+- [Enable extreme high-connection workloads](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/high-connection-workloads): Bypass Linux conntrack with a Calico Open Source policy rule for workloads that handle an extreme number of concurrent connections.
+- [Defend against DoS attacks](https://docs.tigera.io/calico/latest/network-policy/extreme-traffic/defend-dos-attack): Define DoS mitigation rules in Calico Open Source policy that drop connections early using eBPF at the XDP layer, with hardware offload when available.
+- [Encrypt in-cluster pod traffic](https://docs.tigera.io/calico/latest/network-policy/encrypt-cluster-pod-traffic): Turn on WireGuard encryption between pods on a Calico Open Source cluster for state-of-the-art cryptographic protection of in-cluster traffic.
+- [Secure Calico component communications](https://docs.tigera.io/calico/latest/network-policy/comms/): Secure communications between Calico Open Source components — TLS, BGP authentication, and metric-endpoint access control.
+- [Configure encryption and authentication to secure Calico components](https://docs.tigera.io/calico/latest/network-policy/comms/crypto-auth): Turn on TLS authentication and encryption between Calico Open Source components using a custom certificate authority.
+- [Schedule Typha for scaling to well-known nodes](https://docs.tigera.io/calico/latest/network-policy/comms/reduce-nodes): Schedule Typha to well-known nodes and configure the Typha TCP port in a Calico Open Source cluster.
+- [Secure Calico Prometheus endpoints](https://docs.tigera.io/calico/latest/network-policy/comms/secure-metrics): Restrict access to Calico Open Source metric endpoints using network policy.
+- [Secure BGP sessions](https://docs.tigera.io/calico/latest/network-policy/comms/secure-bgp): Configure BGP authentication passwords for Calico Open Source so attackers cannot inject false routing information. ## Observability diff --git a/static/llms.txt b/static/llms.txt index dc5fefc640..cc483679a2 100644 --- a/static/llms.txt +++ b/static/llms.txt @@ -4,16 +4,16 @@ ## Top Pages -- [Calico quickstart guide](https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart): Quickstart for Calico. -- [Quickstart for Calico Enterprise on Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart): Install Calico Enterprise on a single-host Kubernetes cluster for testing or development. -- [What happens when you connect a cluster to Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/connect-cluster): Get answers to your questions about connecting to Calico Cloud. +- [Calico quickstart guide](https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart): Install Calico Open Source on a single-host Kubernetes cluster in roughly 15 minutes — the standard starter path for trying Calico networking and network policy on a development machine. +- [Quickstart for Calico Enterprise on Kubernetes](https://docs.tigera.io/calico-enterprise/latest/getting-started/install-on-clusters/kubernetes/quickstart): Stand up Calico Enterprise on a single-host Kubernetes cluster in about an hour for testing, demos, or development — not intended for production. +- [What happens when you connect a cluster to Calico Cloud](https://docs.tigera.io/calico-cloud/get-started/connect-cluster): What happens when you connect a Kubernetes cluster to Calico Cloud — what is installed, what data leaves the cluster, and what changes in the cluster. 
- [Determine best networking option](https://docs.tigera.io/calico/latest/networking/determine-best-networking): Learn about the different networking options Calico supports so you can choose the best option for your needs. -- [Get started with Calico network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy): Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy. -- [Get started with policy tiers](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy): Understand how tiered policy works and supports microsegmentation. +- [Get started with Calico network policy](https://docs.tigera.io/calico/latest/network-policy/get-started/calico-policy/calico-network-policy): Write your first Calico Open Source NetworkPolicy — sample policies that exercise the rich rule features that extend Kubernetes NetworkPolicy. +- [Get started with policy tiers](https://docs.tigera.io/calico-enterprise/latest/network-policy/policy-tiers/tiered-policy): How tiered policy works in Calico Enterprise — evaluation order, pass actions, and using tiers to enforce microsegmentation across teams. - [Enabling the eBPF data plane](https://docs.tigera.io/calico/latest/operations/ebpf/enabling-ebpf): Step-by-step instructions for enabling the eBPF data plane. - [Observability and troubleshooting](https://docs.tigera.io/calico-enterprise/latest/observability/): Use Elasticsearch logs for visibility into all network traffic with Kubernetes context. - [Configure BGP peering](https://docs.tigera.io/calico/latest/networking/configuring/bgp): Configure BGP peering with full mesh, node-specific peering, ToR, and/or Calico route reflectors. -- [System requirements](https://docs.tigera.io/calico-cloud/get-started/system-requirements): Review cluster requirements to connect to Calico Cloud. 
+- [System requirements](https://docs.tigera.io/calico-cloud/get-started/system-requirements): Cluster, platform, and version requirements a Kubernetes cluster must meet before it can connect to Calico Cloud. ## Use Cases