Userplane
The Userplane allows for the deployment of the PowerDNS base products for DNS Loadbalancing, Recursive DNS, Authoritative DNS and accompanying reporting & security solutions.
The example below shows a Userplane deployment with several components combined:
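A minimal values-file sketch of such a combined deployment; the set names and `replicas` counts are illustrative, and the full range of options is covered in the configuration reference below:

```yaml
# Illustrative sketch: one dnsdist set, one recursor set and one dstoredist set.
# The set names and "replicas" values are placeholders.
dnsdists:
  mydnsdist:
    replicas: 2
recursors:
  myrecursor:
    replicas: 2
dstoredists:
  mydstoredist:
    replicas: 1
```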
Due to the flexibility of the Helm Charts you can deploy any number of instances of a product and tie them together to satisfy your functional use cases. This includes the option to deploy multiple instances of the same product with different configurations.
Components
Currently, the Userplane allows for the deployment of components which aid in the following areas:
- DNS Loadbalancing: DNS, DoS and abuse-aware loadbalancing which brings out the best possible performance in any DNS deployment
- Recursive DNS: High-performance, low latency DNS resolver
- Authoritative DNS: Versatile authoritative server with a management GUI and backend tailored for distributed cloud-native deployments
- Management: Management of Userplane components via Cloud Control API
- Reporting: Distribution, conversion and storage of streams of DNS traffic protobuf messages generated by dnsdist and recursor
- Security: Highly customizable filtering of DNS traffic in dnsdist and/or recursor
Helm Charts
Installation of the Userplane deployment can be done via the following Helm Charts:
| Name | Location | Description |
|---|---|---|
| `powerdns` | `oci://registry.open-xchange.com/cloudcontrol/powerdns` | Manages a Userplane deployment |
| `powerdns-crds` | `oci://registry.open-xchange.com/cloudcontrol/powerdns-crds` | Provides CRDs required for a Userplane deployment |
| `powerdns-operators` | `oci://registry.open-xchange.com/cloudcontrol/powerdns-operators` | (Optional) Provides Operators for a Userplane deployment |
powerdns-crds
You only need to install the `powerdns-crds` Helm Chart once in a cluster. The CRDs managed by this Helm Chart are cluster-scoped, meaning any number of Userplane deployments in the same cluster can utilize them. Whilst not required, we recommend installing the `powerdns-crds` Helm Chart into a dedicated namespace.
powerdns-operators
The `powerdns-operators` Helm Chart was originally made available to help easily deploy Postgres databases for use by Authoritative server and ZoneControl. You can still use it for this purpose, but we highly recommend using Lightning Stream with S3 for Authoritative server instead and a self-managed database for ZoneControl.
Installation
Installation of a Userplane deployment must be done in the correct order: you must first deploy `powerdns-crds` (only once per cluster), since `powerdns` depends on custom resource definitions managed by `powerdns-crds`. The steps below walk you through a basic installation of a Userplane deployment on a cluster which does not yet have `powerdns-crds` available:
Before using the Helm Charts, make sure your `helm` client is authenticated against the OX registry:
```bash
helm registry login registry.open-xchange.com --username=<REGISTRY_USERNAME> --password=<REGISTRY_PASSWORD>
```
You can then either install directly from the OX registry:
```bash
helm install <RELEASE_CRDS> --namespace=<CRD NAMESPACE> oci://registry.open-xchange.com/cloudcontrol/powerdns-crds --version <VERSION>
helm install <RELEASE> --namespace=<NAMESPACE> oci://registry.open-xchange.com/cloudcontrol/powerdns --version <VERSION>
```
Or you can download the Helm Charts for offline usage:
```bash
helm pull oci://registry.open-xchange.com/cloudcontrol/powerdns-crds --version <VERSION>
helm pull oci://registry.open-xchange.com/cloudcontrol/powerdns --version <VERSION>
```
This should result in files named `powerdns-crds-<VERSION>.tgz` and `powerdns-<VERSION>.tgz`, which you can use to install from:
```bash
helm install <RELEASE_CRDS> --namespace=<CRD NAMESPACE> powerdns-crds-<VERSION>.tgz
helm install <RELEASE> --namespace=<NAMESPACE> powerdns-<VERSION>.tgz
```
Since the Cloud Control Helm Charts are modular and highly configurable, you need to define and configure the components which you would like to deploy. Pass this definition and configuration into the `helm install` command via the `--values` argument. To install directly from the OX registry with a YAML file named `myenvironment.yaml`, you can use this command:
```bash
helm install <RELEASE> --namespace=<NAMESPACE> oci://registry.open-xchange.com/cloudcontrol/powerdns --version <VERSION> --values=myenvironment.yaml
```
Configuration Reference
Userplane uses the same concepts as Controlplane to define and configure components. To deploy a component, it must be defined by providing a key-value pair under the proper root node. For example, to configure a set of instances of dnsdist and recursor:
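A minimal sketch; the set names and the `replicas` field are illustrative:

```yaml
# One dnsdist instance set and one recursor instance set.
# "mydnsdist", "myrecursor" and "replicas" are placeholder values.
dnsdists:
  mydnsdist:
    replicas: 2
recursors:
  myrecursor:
    replicas: 2
```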
You can also use this format to define multiple sets of instances of the same component, for example:
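For instance, two independently configured recursor sets (again a sketch with placeholder names and sizes):

```yaml
# Two sets of the same component, each with its own configuration
recursors:
  internal:
    replicas: 3
  external:
    replicas: 2
```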
Parameters which can be used to configure Userplane are shown in the table below.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api` | API | | Configuration options for Cloud Control API |
| `auths` | Map of Auth | | Configuration options for Authoritative server |
| `containerSecurityContext` | k8s: SecurityContext | | SecurityContext applied to all containers for all components. `containerSecurityContext` explicitly configured on an instance set takes precedence over this. Default: Specified on the component level |
| `dnsdists` | Map of dnsdist | | Configuration options for dnsdist |
| `dstoredists` | Map of dstoredist | | Configuration options for dstoredist |
| `filterSettings` | Map of FilterSettings | | Configuration options for filtering |
| `global` | Global | | Configuration options for important global usage within the Cloud Control Helm Charts |
| `ipFamily` | IPFamily | | Configuration options for cluster networking |
| `podAnnotations` | k8s: Annotations | `{}` | Annotations to be added to all pods. `podAnnotations` explicitly configured on an instance set takes precedence over this |
| `podDisruptionBudget` | k8s: PodDisruptionBudgetSpec | `{}` | Spec of PodDisruptionBudget to be applied to all deployments. `podDisruptionBudget` explicitly configured on an instance set takes precedence over this |
| `podLabels` | k8s: Labels | `{}` | Labels to be added to all pods. `podLabels` explicitly configured on an instance set takes precedence over this |
| `podSecurityContext` | k8s: PodSecurityContext | | SecurityContext to be applied to all pods. `podSecurityContext` explicitly configured on an instance set takes precedence over this. Default: Specified on the component level |
| `prometheus` | Prometheus | | Configuration options for automatic Prometheus scraping if the Prometheus Operator is available |
| `recursors` | Map of Recursor | | Configuration options for Recursor |
| `resolvers` | Map of Resolver | | Configuration options for resolvers |
| `resourceDefaults` | boolean | `false` | If true, apply default resource limits to each container |
| `rulesets` | Map of Rulesets | | Configuration options for dnsdist rulesets |
| `serviceLabels` | k8s: Labels | `{}` | Labels to be added to all services. `serviceLabels` explicitly configured on an instance set takes precedence over this |
| `tolerations` | List of k8s: Tolerations | | Tolerations to be applied to all pods. `tolerations` explicitly configured on an instance set takes precedence over this |
| `userBackends` | Map of UserBackend | | Configuration options for user backends |
| `volumes` | List of Volume | | Configuration options for extra volumes on all instances |
| `zonecontrols` | Map of ZoneControl | | Configuration options for ZoneControl |
Global
Global options for this Helm Chart allow for the configuration of:
- Image pull secrets to configure access to the OX registry or a private cache/intermediary
- Compatibility mode for supported non-standard Kubernetes platforms
Example of using `global` to configure a private registry where you stored copies of the Cloud Control container images:
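A sketch based on the `image` override described below; the registry hostname is a placeholder for your own registry or cache:

```yaml
global:
  image:
    registry: "my-registry.example.com"   # placeholder for your private registry
```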
Or to configure Cloud Control to use a pre-existing Secret containing your registry credentials named `my-registry-credentials`:
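A sketch using the `imagePullSecretsList` parameter from the table below:

```yaml
global:
  imagePullSecretsList:
    - "my-registry-credentials"   # name of your pre-existing Secret
```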
Parameters which can be used:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `image` | ImageOverrides | `{}` | Overrides to configure where container images are pulled from. Default: The OX registry |
| `imagePullSecrets` | Map of ImagePullSecret | `{}` | Image pull secrets for which Secrets should be created and then used by the service accounts to pull container images from the registry. Recommendation: pre-provision the actual Secrets in your namespace and reference them using `imagePullSecretsList` |
| `imagePullSecretsList` | List of string | `[]` | List of names of Secrets which should be used by service accounts to pull container images from the registry |
| `openshift` | OpenShift | `{}` | Configuration of OpenShift compatibility mode. Default: disabled |
Image Overrides
You can configure the Helm Chart to ensure Kubernetes pulls container images from a different location than the OX registry. For example:
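A sketch combining the parameters below; the registry and repository values are placeholders:

```yaml
global:
  image:
    registry: "my-registry.example.com"   # placeholder registry hostname
    repository: "cloudcontrol-mirror"     # placeholder repository
    pullPolicy: "IfNotPresent"
```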
Parameters which can be used:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `registry` | string | `registry.open-xchange.com` | Override the base hostname of the URI from where container images are pulled |
| `repository` | string | | Override the repository from which the container images are pulled. Default: Varies based on the type of component |
| `pullPolicy` | string | `"IfNotPresent"` | Force an image pull policy for all containers |
Image Pull Secret
You can configure the Helm Chart to create Secrets for one or more sets of credentials to use to authenticate against a registry. Each entry should be a key-value pair, with:
- key: Name of the secret
- value: Dictionary holding the configuration of the secret
For example, to have an image pull secret with the name `myIPSSecret` to authenticate against the OX registry:
```yaml
global:
  image:
    imagePullSecrets:
      myIPSSecret:
        registry: registry.open-xchange.com
        username: <USERNAME>
        password: <PASSWORD>
        email: admin@example.com
```
Parameters which can be used:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `registry` | string | yes | | Base URI of the registry |
| `username` | string | yes | | Username to authenticate with |
| `password` | string | yes | | Password for authentication |
| `email` | string | yes | | Email address to satisfy Kubernetes requirements for an image pull secret. Can contain dummy data as long as it satisfies the standard format of an email address |
OpenShift
OpenShift requires some specific default settings in Cloud Control to be adjusted to satisfy the platform's requirements. You can configure this Helm Chart to deploy in OpenShift compatibility mode using the following example:
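A sketch, assuming the boolean `openshift` flag nests under `global.openshift` as the parameter tables on this page describe:

```yaml
global:
  openshift:
    openshift: true   # enable OpenShift compatibility mode
```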
Parameters which can be used:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `openshift` | boolean | `false` | If true, enable OpenShift compatibility mode |
Prometheus
If you have the Prometheus Operator installed (either yourself or via the Monitoring Helm Chart), you can enable Cloud Control to automatically deploy the necessary PodMonitor objects to automate metric scraping. You can configure this Helm Chart to deploy the PodMonitor objects using the following example:
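A sketch with a hypothetical `enabled` flag under the top-level `prometheus` node; consult the Prometheus parameter reference for the exact option names:

```yaml
prometheus:
  enabled: true   # assumed flag name, not confirmed by this page
```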
Resolvers
If you want to forward/direct DNS traffic to endpoints which are not managed by Cloud Control, you can configure their IPs as external resolvers. Example of an external resolver configuration:
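A sketch named `externalresolvers`; the IP addresses are placeholders, and the optional `dnsdist` block (parameters described further below) tunes how dnsdist health-checks these endpoints:

```yaml
resolvers:
  externalresolvers:
    ips:
      - "192.0.2.10"   # placeholder IPv4 address
      - "192.0.2.11"   # placeholder IPv4 address
    port: 53
    dnsdist:
      checkInterval: 2
      maxCheckFailures: 2
```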
You can now reference `externalresolvers` as an endpoint for traffic, in dnsdist or recursor for example.
You can further configure resolvers using the following parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `ips` | List of string | yes | | List of IP addresses. Can be both IPv4 and IPv6 |
| `port` | integer | | `53` | Port to send traffic to |
| `dnsdist` | dnsdistConfig | | | Settings to be applied to each resolver endpoint when added to dnsdist as a server |
Resolver: dnsdist Configuration
Using these parameters you can configure additional behaviour when instances of this resolver set are added to dnsdist for loadbalancing:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `checkClass` | integer | `1` | Number to use as QCLASS in the health-check query. Defaults to DNSClass.IN (`1`) |
| `checkInterval` | integer | | The time in seconds between health checks |
| `checkName` | string | `"a.root-servers.net"` | String to use as QNAME in the health-check query |
| `checkTimeout` | integer | `1000` | The timeout (in milliseconds) of a health-check query |
| `checkType` | string | `"A"` | String to use as QTYPE in the health-check query |
| `disableZeroScope` | boolean | `false` | If true, disable the EDNS Client Subnet zero scope feature, which does a cache lookup for an answer valid for all subnets (ECS scope of 0) before adding ECS information to the query and doing the regular lookup. This requires the `parseECS` option of the corresponding cache to be set to true |
| `healthCheckMode` | string | `"auto"` | Type of health check to perform. The default `"auto"` uses the `checkName`, `checkType`, etc. parameters. Alternatives are `"up"` (no health check, always available for traffic) and `"down"` (no health check, never available for traffic) |
| `maxCheckFailures` | integer | `1` | Allow this number of check failures before declaring the backend down |
| `mustResolve` | boolean | `false` | If true, the health check must return an RCODE different from NXDomain, ServFail and Refused. Default is false, meaning that every RCODE except ServFail is considered valid |
| `order` | integer | | The order of servers in this set, used by the `leastOutstanding` and `firstAvailable` policies |
| `qps` | integer | | Limit the number of queries per second to this amount, when using the `firstAvailable` policy |
| `reconnectOnUp` | boolean | `false` | If true, close and reopen the sockets when a server transits from Down to Up |
| `retries` | integer | | The number of TCP connection attempts to servers for a given query |
| `rise` | integer | `1` | Require this many consecutive successful checks before declaring the backend up |
| `setCD` | boolean | `false` | Set the CD (Checking Disabled) flag in the health-check query |
| `sockets` | integer | `1` | Number of sockets (and thus source ports) used toward the backend server |
| `source` | string | | Name of the interface which dnsdist will use to try to send traffic to this resolver |
| `tcpConnectTimeout` | integer | | The timeout (in seconds) of a TCP connection attempt |
| `tcpFastOpen` | boolean | `false` | Whether to enable TCP Fast Open |
| `tcpRecvTimeout` | integer | | The timeout (in seconds) of a TCP read attempt |
| `tcpSendTimeout` | integer | | The timeout (in seconds) of a TCP write attempt |
| `useClientSubnet` | boolean | `false` | Add the client IP address in the EDNS Client Subnet option when forwarding the query to this backend |
| `useProxyProtocol` | boolean | `false` | Add a proxy protocol header to the query, passing along the client IP address and port along with the original destination address and port |
| `weight` | integer | `1` | The weight of servers in this set, used by the `wrandom`, `whashed` and `chashed` policies |
User Backends
User Backends are reusable sets of configuration which you can apply to components which require authentication and authorization. Having them defined as their own configuration item allows you to re-use them for multiple components. Components which can utilize these user backends:
- Cloud Control API
- ZoneControl
When configuring a user backend, you always need to supply its `type`. Based on the `type`, you will have different configuration options available. Currently, LDAP (`type: ldap`) is the supported backend; more backends are planned for future Cloud Control releases.
User Backend: LDAP
When the `type` is set to `ldap`, you can configure the following parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `attrFirstName` | string | | `"givenName"` | Name of LDAP attribute to attempt to take a user's first name from |
| `attrLastName` | string | | `"sn"` | Name of LDAP attribute to attempt to take a user's last name from |
| `attrEmail` | string | | `"mail"` | Name of LDAP attribute to attempt to take a user's email address from |
| `bindUser` | string | | | DN of an LDAP user to connect with for user/group searching. Only used if `bindSecretName` is not set |
| `bindPassword` | string | | | Password of the user configured via `bindUser`. Only used if `bindSecretName` is not set |
| `bindSecretName` | string | | | Name of a pre-existing Secret holding the DN & password of a user to connect with for user/group searching. Has priority over `bindUser` and `bindPassword` |
| `bindSecretUserKey` | string | | `"username"` | Name of the item in the `bindSecretName` Secret containing the user DN |
| `bindSecretPasswordKey` | string | | `"password"` | Name of the item in the `bindSecretName` Secret containing the password |
| `cacheTimeout` | integer | | `0` | If configured, allow applications using this userBackend to cache data from LDAP for this duration (in seconds) |
| `caSecret` | string | | | Name of a pre-existing Secret from which to load CA certificates. Note: the Secret must have a `ca.crt` data item |
| `clientSecret` | string | | | Name of a pre-existing Secret from which to load client certificate + key. Note: the Secret must have `tls.crt` and `tls.key` data items |
| `groupType` | string | | `"posixGroup"` | Object class of groups to use for determining group membership. Available options: `"posixGroup"`, `"groupOfNames"`, `"groupOfUniqueNames"` |
| `groupBases` | List of GroupBase | | `[]` | List of base locations inside LDAP to search for groups |
| `host` | string | | | LDAP host to connect to |
| `insecureSkipVerify` | boolean | | `false` | Whether or not to skip verification of certificates presented by the LDAP endpoint |
| `port` | integer | | `389` | LDAP port to connect to. Only used if `host` is set |
| `scheme` | string | | `"ldap"` | LDAP scheme to connect with. Only used if `host` is set |
| `uri` | string | | | LDAP connect string (must be a full connect string, e.g. `ldap://my.ldap.service.local:389`). Has priority over `host` |
| `userAttr` | string | | `"uid"` | Name of LDAP attribute to use to search for a user using their username. Usually `sAMAccountName` for Active Directory users |
| `userBases` | List of UserBase | | `[]` | List of base locations inside LDAP to search for users |
Note: You must set either a `host` or `uri` to ensure the LDAP connector has an address to connect to.
Group bases
Group bases allow you to specify base locations within LDAP to search for groups. You can configure a group base with the following parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `base` | string | yes | | Base location for groups |
| `scope` | string | | `"subtree"` | Scope to search the base with. Available options: `"base"`, `"onelevel"`, `"subtree"` |
| `filters` | dictionary | | `{}` | Dictionary of LDAP filters to apply when searching for groups |
An example with 2 group bases defined:
```yaml
userBackends:
  myLDAP:
    type: ldap
    uri: "ldap://openldap.openldap.svc.cluster.local:636"
    groupBases:
      - base: "ou=groups,dc=example,dc=org"
      - base: "ou=moregroups,dc=example,dc=org"
        scope: "onelevel"
        filters:
          objectClass: "orgGroup"
          cn: "internal-*"
```
The group base for `"ou=moregroups,dc=example,dc=org"` has a modified scope and applies filters:
- LDAP group attribute `objectClass` must match `"orgGroup"`
- LDAP group attribute `cn` must start with `"internal-"`
User bases
User bases allow you to specify base locations within LDAP to search for users. You can configure a user base with the following parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `base` | string | yes | | Base location for users |
| `scope` | string | | `"subtree"` | Scope to search the base with. Available options: `"base"`, `"onelevel"`, `"subtree"` |
| `filters` | dictionary | | `{}` | Dictionary of LDAP filters to apply when searching for users |
An example with a user base defined:
```yaml
userBackends:
  myLDAP:
    type: ldap
    uri: "ldap://openldap.openldap.svc.cluster.local:636"
    userBases:
      - base: "ou=users,dc=example,dc=org"
        filters:
          objectClass: "orgUser"
```
The user base for `"ou=users,dc=example,dc=org"` has a filter applied:
- LDAP user attribute `objectClass` must match `"orgUser"`
Posix-based LDAP example
A configuration example with a Posix-based LDAP user backend:
```yaml
userBackends:
  myLDAP:
    type: ldap
    uri: "ldaps://openldap.openldap.svc.cluster.local:636"
    caSecretName: "ldap-ca-cert"
    bindSecretName: "ldap-bind-user"
    userBases:
      - base: "ou=users,dc=example,dc=org"
    groupBases:
      - base: "ou=groups,dc=example,dc=org"
```
Active Directory example
A configuration example with an Active Directory-based LDAP user backend:
```yaml
userBackends:
  myLDAP:
    type: ldap
    uri: "ldaps://my.ldap.service:636"
    caSecretName: "ad-ca-cert"
    bindSecretName: "ad-bind-user"
    userAttr: sAMAccountName
    userBases:
      - base: "ou=users,dc=example,dc=org"
    groupType: groupOfNames
    groupBases:
      - base: "ou=groups,dc=example,dc=org"
```
Volumes
To configure additional storage for all instances in a Cloud Control installation you can add a `volumes` parameter at the top level:
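A sketch mounting a ConfigMap as an extra volume; the volume name, mount path and ConfigMap name are placeholders:

```yaml
volumes:
  - name: extra-config              # placeholder volume name
    mountPath: /opt/extra-config    # placeholder, pick a path that will not conflict
    readOnly: true
    volumeSource:
      configMap:
        name: my-extra-config       # placeholder ConfigMap name
```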
Parameters which can be used:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `name` | string | yes | | Name of the volume |
| `mountPath` | string | yes | | Path to which the volume will be mounted. Make sure this is not so generic that it could conflict with other volumes |
| `volumeSource` | k8s VolumeSource | yes | | A valid Kubernetes VolumeSource |
| `allContainers` | boolean | | `false` | If true, mount this volume to all containers. Default: false (mounts the volume only to the container running the core PowerDNS components) |
| `readOnly` | boolean | | `false` | If true, mount the volume in read-only mode |
IP Family
Some components require knowledge of the cluster's networking stack in order to function optimally. There are three parameters which should be configured here:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `ipv4` | boolean | `true` | If true, enable IPv4 connectivity on service objects and for internal communication |
| `ipv6` | boolean | `false` | If true, enable IPv6 connectivity on service objects and for internal communication |
| `families` | List of string | | Ordered list of preference if both IPv4 and IPv6 are available on the cluster |
There are currently 4 different combinations which you can utilize:
- IPv4: Singlestack - IPv4
- IPv6: Singlestack - IPv6
- IPv46: Dualstack - IPv4 primary
- IPv64: Dualstack - IPv6 primary
Singlestack: IPv4
In an IPv4-only cluster, you can omit the configuration altogether, because this is the default setting for Cloud Control. If you want to be explicit and configure it anyway, it looks as follows:
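A sketch using only the boolean flags from the table above:

```yaml
ipFamily:
  ipv4: true
  ipv6: false
```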
Singlestack: IPv6
In an IPv6-only cluster, the following should be configured:
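A sketch mirroring the IPv4 example, with only IPv6 enabled:

```yaml
ipFamily:
  ipv4: false
  ipv6: true
```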
Dualstack: IPv4 Primary
When running a dualstack networking setup with IPv4 as primary, you can use the following configuration:
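A sketch with both stacks enabled; the `"IPv4"`/`"IPv6"` family names follow the Kubernetes convention and are an assumption here:

```yaml
ipFamily:
  ipv4: true
  ipv6: true
  families:
    - "IPv4"
    - "IPv6"
```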
Dualstack: IPv6 Primary
When running a dualstack networking setup with IPv6 as primary, you can use the following configuration (note the order of the `families` has changed and now lists IPv6 first):
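A sketch identical to the IPv4-primary example except for the `families` order (again assuming the Kubernetes `"IPv4"`/`"IPv6"` naming):

```yaml
ipFamily:
  ipv4: true
  ipv6: true
  families:
    - "IPv6"
    - "IPv4"
```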