AgileTV CDN Manager (esb3027)
1 - Introduction to AgileTV CDN Manager
The ESB3027 AgileTV CDN Manager is a suite of software designed to facilitate the coordination of one or more instances of ESB3024 AgileTV CDN Director. It provides services and common functionality shared between independent routers, such as user identity management, API services, and common messaging and data persistence.
The Manager software is deployed locally using a self-hosted Kubernetes orchestration framework. Kubernetes provides many advantages out of the box including automatic horizontal scaling, load-balancing, service monitoring, persistent data storage, and a Cloud-Native deployment infrastructure.
The installer for ESB3027 AgileTV CDN Manager automatically deploys a lightweight Kubernetes implementation using Rancher K3s, along with all necessary tools to manage the cluster.
For the 1.0 release of ESB3027 AgileTV CDN Manager, only one supported configuration is provided. This configuration requires a single physical or virtual machine running Red Hat Enterprise Linux 8 or 9, or a compatible clone such as Oracle Linux. A minimum of 8GB of available RAM is required to deploy the software. During the installation process, one or more packages may be required from the software repositories provided by the Operating System. If the official repositories cannot be reached, the original ISO must be properly mounted such that package installation can be performed.
One additional prerequisite for the installation is a Fully-Qualified Domain Name that is DNS-resolvable and points to the node. This must be set up prior to installing the software, and all APIs must use this DNS name. Attempting to access any APIs via an IP address may result in failures due to Cross-Origin Resource Sharing protections which are enforced.
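As a quick sanity check before installing, you can confirm that the chosen name resolves from the node itself. The name manager.example.com below is only a placeholder for your actual FQDN:
getent hosts manager.example.com   # should print the IP address of the node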
Please ensure that at a minimum, you are familiar with the terminology from the Glossary.
Quickstart Guide
This guide gives a quick overview of the basic installation procedure. For an in-depth description of each step, see the Installation Guide.
Preparation
Make sure SELinux and Firewalld are disabled and the ESB3027 AgileTV CDN Manager ISO is mounted. For this guide we assume that the ISO is mounted on /mnt/acd-manager; however, any mountpoint may be used.
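For example, a minimal pre-flight check and mount might look like the following; the ISO filename is a placeholder, so substitute the filename of the release you downloaded:
getenforce                        # should print Permissive or Disabled
systemctl is-active firewalld     # should print inactive (or unknown if not installed)
mkdir -p /mnt/acd-manager
mount -o loop,ro esb3027-acd-manager-<version>.iso /mnt/acd-manager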
Install the Cluster
Start by installing the Kubernetes cluster by running the installer script.
/mnt/acd-manager/install
Generate the Configuration Template
There are several scripts available on the ISO which are used to prepare the configuration template. All take a wizard-based approach, prompting the user to provide information.
Create an SSL Certificate Secret (Optional)
If you have valid SSL certificates that should be used in place of the built-in self-signed certificates, you will need both the certificate and key files present on the node, and to run the script /mnt/acd-manager/generate-ssl-secret. This will generate a Kubernetes Secret containing the certificates.
Generate the Zitadel Masterkey (Optional)
It is recommended to generate a unique master key used by Zitadel to protect sensitive information at rest. This can be accomplished using the /mnt/acd-manager/generate-zitadel-secret script. This will generate a cryptographically secure key and create the corresponding Kubernetes Secret containing this value.
Generate the Configuration Template
Once you have prepared all the necessary values, running /mnt/acd-manager/configure will generate a file values.yaml in the current directory. Be sure to inspect this file before proceeding. Verify that all required fields are correctly populated, such as the addresses of routers, API endpoints, and other components like the Configuration GUI. Additionally, ensure that any necessary sections are uncommented and that the file adheres to the expected format for successful deployment.
Deploy the Software
After generating the values.yaml file in the previous section, install the “acd-manager” Helm chart using the following command:
helm upgrade --install acd-manager --values values.yaml --atomic --timeout=10m /mnt/acd-manager/helm/charts/acd-manager
When this command returns, all pods in the deployment should be marked ready, and the ESB3027 AgileTV CDN Manager installation is complete. You can verify the state of the cluster with kubectl.
kubectl get pods
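To watch the pods come up in real time instead of re-running the command, the --watch flag can be added; press Ctrl+C once every pod reports Running or Completed:
kubectl get pods --watch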
2 - Monitoring within the AgileTV CDN Manager Cluster
As of AgileTV CDN Manager 1.2.0, all nodes that are part of the cluster can be monitored using Telegraf, allowing for the collection of hardware metrics. This enables monitoring of the health and performance of any node within the cluster.
Telegraf Installation and Configuration
The mounted ISO directory for the AgileTV CDN Manager contains the Telegraf RPM in ./Packages:
$ ls -1 /mnt/acd-manager/Packages/
telegraf-1.34.3-1.x86_64.rpm
...
Use this RPM to install Telegraf on all nodes that should be monitored. This RPM is compatible with RHEL 8 and 9 (CentOS, Oracle Linux, etc.).
Telegraf will be configured to achieve the following:
- Collect hardware metrics regarding CPU usage, memory utilization, and disk usage.
- Send the collected metrics to an instance of the acd-telegraf-metrics-database service. This service is installed alongside ESB3024 AgileTV CDN Director. See acd-telegraf-metrics-database for more details.
Assuming that ESB3024 AgileTV CDN Director has been installed on the host director-host, replace /etc/telegraf/telegraf.conf with the following configuration:
[agent]
interval = "10s"
round_interval = true
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
debug = false
quiet = false
logfile = ""
# Output to acd-telegraf-metrics-database instance
[[outputs.influxdb_v2]]
urls = ["http://director-host:8086"]
# If acd-telegraf-metrics-database is configured to use TLS with a self-signed
# certificate, uncomment the following line.
# insecure_skip_verify = true
# If acd-telegraf-metrics-database is configured to use token authentication,
# uncomment the following line.
# token = "@{secretstore:metrics_auth_token}"
# Secret store for storing sensitive data
[[secretstores.os]]
id = "secretstore"
## CPU Metrics
[[inputs.system]]
fieldinclude = ["load1", "load5", "load15"]
## Memory Metrics
[[inputs.mem]]
fieldinclude = ["used_percent", "total"]
## Disk Metrics
[[inputs.disk]]
mount_points = ["/"]
## Ignore file system types.
ignore_fs = ["tmpfs", "devtmpfs", "overlay"]
## Report only the used_percent and total fields.
fieldinclude = ["used_percent", "total"]
Run systemctl restart telegraf to apply the changes. To verify that Telegraf is running, execute systemctl status telegraf or check the Telegraf logs with journalctl -fu telegraf.
Additional input plugins can be added to the Telegraf configuration to collect more metrics. Visit the Telegraf plugin directory for a list of available input plugins.
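When experimenting with additional plugins, a one-shot dry run is a quick, non-destructive way to validate the configuration before restarting the service; it prints the metrics that would be collected to stdout without sending them to any output:
telegraf --config /etc/telegraf/telegraf.conf --test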
Metrics Token Authentication
If the acd-telegraf-metrics-database instance is configured to use token authentication with secrets, special configuration is required to access the secret store. See Using Secrets for Request Authorization for more details on how acd-telegraf-metrics-database uses tokens.
The token field in the [[outputs.influxdb_v2]] section must be uncommented. The secret value must be equivalent to the acd-telegraf-metrics-database service's secret value. To set the secret value, use the following command:
$ sudo -u telegraf telegraf secrets set secretstore metrics_auth_token
Enter secret value:
Note that the command above must be run as the user telegraf, since the Telegraf service runs as this user.
This command will prompt you to enter the secret value to be stored in the secret store secretstore with the key metrics_auth_token. Note that the secret store name and secret key must match the values used in the [[outputs.influxdb_v2]] section of the Telegraf configuration. The secret value must be the same as the one used in acd-telegraf-metrics-database.
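If your Telegraf version provides the secrets list subcommand (available in recent 1.x releases), you can confirm that the key was stored, again running as the telegraf user:
sudo -u telegraf telegraf secrets list secretstore   # should list the key metrics_auth_token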
Visualizing the Metrics
If Telegraf is running and configured correctly, the metrics will be sent to the acd-telegraf-metrics-database service on the host director-host. This service is periodically scraped by the same host's Prometheus instance, which is used to visualize the metrics in Grafana. Grafana is accessible at http://director-host:3000, and the collected metrics appear under the following metric names:
disk_total
disk_used_percent
mem_total
mem_used_percent
system_load1
system_load15
system_load5
3 - Installation
3.1 - Installing the 1.2.1 Release
Prerequisites
- One or more physical or virtual machines running Red Hat Enterprise Linux 8 or 9, or a compatible operating system, meeting at least the minimum specifications listed in the Dimensioning Guide
- A resolvable fully-qualified DNS hostname pointing to the node.
For the 1.2.1 Release of ESB3027 AgileTV CDN Manager, the following limitations are in place:
- The release does not support firewalld or SELinux in enforcing mode. These services MUST be disabled before starting.
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
setenforce 0
systemctl disable --now firewalld
- Installing the software depends on several binaries that will be placed in the /usr/local/bin directory on the node. On RHEL-based systems, if a root shell is obtained using sudo, that directory will not be present in the path. Either you must configure sudo to not exclude that directory from the path, or use su - which does not exhibit this behavior. You can verify whether that directory is included in the path with the following command:
echo $PATH | grep /usr/local/bin
Networking Requirements
If the node does not have an interface with a default route, a default route must be configured; even a black-hole route via a dummy interface will suffice. K3s requires a default route in order to auto-detect the node's primary IP, and for cluster routing to function properly. To add a dummy default route, do the following:
ip link add dummy0 type dummy
ip link set dummy0 up
ip addr add 203.0.113.254/31 dev dummy0
ip route add default via 203.0.113.255 dev dummy0 metric 1000
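You can confirm that the route is in place before running the installer; note that routes added with ip in this way do not persist across a reboot, so they must be recreated via your network configuration tooling if the node is restarted:
ip route show default   # should list the default route via dummy0 added above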
Installing the Manager
The following procedure will assume that the latest ISO for the Manager has been copied to the node.
- Mount the ESB3027 ISO. For this example, we assume that the ISO will be mounted on /mnt/acd-manager, but any mountpoint may be used. Substitute the actual mountpoint for /mnt/acd-manager in all following commands.
mkdir -p /mnt/acd-manager
mount -o loop,ro esb3027-acd-manager-1.2.1.iso /mnt/acd-manager
- Run the install script from the ISO.
/mnt/acd-manager/install
If additional worker nodes are being used, they should ideally be added right after running the installer. See the Clustering Guide for more information.
Generating the Configuration
The following instructions should all be executed from the primary node unless otherwise specified.
Loading SSL Certificates (Optional)
If you wish to include valid SSL certificates for the Manager, you must first load them into a Kubernetes Secret. A helper script generate-ssl-secret has been provided on the ISO to read the certificate and key files and create the Kubernetes Secret for you. If this is not performed, the Manager will generate and use a self-signed certificate automatically.
The generate-ssl-secret script uses a simple wizard approach to prompt for the files containing the SSL certificate and SSL private key, and the name of the secret. The name can be any string containing alphanumeric characters and either - or _.
/mnt/acd-manager/generate-ssl-secret
Enter the path to the SSL Certificate File: /root/example.com.cert
Enter the path to the SSL Key File: /root/example.com.key
Enter the name of the secret containing the SSL Certificate and Key (e.g. my-ssl-certs): example-com-ssl-key
secret/example-com-ssl-key created
Generating the Zitadel Masterkey (Optional)
Zitadel is user identity management software used to manage authentication across the system. Certain data in Zitadel, such as credential information, needs to be kept secure. Zitadel uses a 32-byte cryptographic key to protect this information at rest, and it is highly recommended that this key be unique to each environment.
If this step is not performed, a default master key will be used instead. Using a hardcoded master key poses significant security risks, as it may expose sensitive data to unauthorized access. It is highly recommended to not skip this step.
The generate-zitadel-secret script generates a secure 32-byte random key and loads that key into a Kubernetes Secret for Zitadel to use. This step should only be performed once, as changing the key will result in all encrypted credential information being unrecoverable. For the secret name, any string containing alphanumeric characters and either - or _ is allowed.
/mnt/acd-manager/generate-zitadel-secret
Enter the name of the secret containing the Zitadel Masterkey (e.g. my-zitadel-key): zitadel-key
secret/zitadel-key created
Generating the Configuration Template
The last step prior to installing the Manager software is to generate a values.yaml file, which is used by Helm to provide configuration values for the deployment. This can be performed by running the configure script on the ISO. The script will prompt the user for a few pieces of information, including the “external domain”, which is the DNS-resolvable fully-qualified domain name that must be used to access the Manager software, and the optional names of the secrets generated in the previous sections. Ensure that you have sufficient privileges to write to the current directory.
/mnt/acd-manager/configure
Enter the domain you want to use for Zitadel (e.g. zitadel.example.com): manager.example.com
Enter the name of the secret containing the SSL Certificate and Key (e.g. my-ssl-certs): example-com-ssl-key
Enter the name of the secret containing the Zitadel Masterkey (e.g. my-zitadel-key): zitadel-key
Wrote /root/values.yaml
WARNING! The domain used for Zitadel MUST be resolvable by any client which accesses Zitadel, and this value cannot be modified once the acd-manager Helm chart is deployed. Ensure that this address can be resolved.
After running this wizard, a values.yaml file will be created in the current directory. Within this file, there is a commented-out section containing the routers, gui, and geoip addresses. This should optionally be filled out before continuing so that it resembles the following structure.
NOTE: This file is YAML, and indentation and whitespace are part of the format. Before continuing, it is recommended that you validate the contents of the file to ensure that the syntax is OK. You can paste it into an online validator such as yamllint.com or any other trusted YAML validator; a minimal local check is also sketched after the example below.
gateway:
  configMap:
    annotations: []
  routers:
    - name: router1
      address: 10.16.48.100
    - name: router2
      address: 10.16.48.101
  gui:
    host: 10.16.48.100
    port: 7001
  geoip:
    host: 10.16.48.100
    port: 5003
ingress:
  tls:
    - hosts:
        - 10.16.48.140.sslip.io
      secretName: null
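As an alternative to a web-based validator, a minimal local syntax check can be done with Python, assuming the python3 and python3-pyyaml packages are installed; this only verifies that the file parses, not that the values are correct:
python3 -c 'import yaml; yaml.safe_load(open("values.yaml")); print("values.yaml parses OK")'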
Loading Extra Container Images for Air-Gapped Sites
If the cluster is being deployed in an air-gapped environment, the third-party container images will need to be loaded onto the node before deploying the Manager software. These images are provided on the second “extras” ISO.
The following procedure will assume that the latest extras ISO for the Manager has been copied to the node.
- Mount the ESB3027 Extras ISO. For this example, we assume that the ISO will be mounted on /mnt/acd-manager-extras, but any mountpoint may be used. Substitute the actual mountpoint for /mnt/acd-manager-extras in all following commands.
mkdir -p /mnt/acd-manager-extras
mount -o loop,ro esb3027-acd-manager-extras-1.2.1.iso /mnt/acd-manager-extras
- Run the load-images command from the ISO.
/mnt/acd-manager-extras/load-images
After the command completes, the third-party container images should be present on the node. These can be viewed by running k3s crictl images. You should see in the output images for “zitadel”, “kafka”, and “redis”, as well as a few others.
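For example, to quickly confirm that the expected third-party images were loaded:
k3s crictl images | grep -E 'zitadel|kafka|redis'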
Deploying ESB3027 AgileTV CDN Manager
The ESB3027 software is deployed using Helm.
helm install acd-manager \
/mnt/acd-manager/helm/charts/acd-manager \
--values values.yaml \
--atomic \
--timeout=10m
The use of the --atomic and --timeout flags will cause Helm to wait up to 10 minutes for all pods to be in the Ready state. For example, a timeout might occur if there are insufficient resources available in the cluster or if a misconfiguration, such as incorrect environment variables or missing secrets, prevents a pod from starting. If the timeout is reached before all pods are Ready, the entire installation will be automatically rolled back.
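If the deployment does fail, the release state and history can be inspected with standard Helm commands, and recent cluster events often point at the cause; note that after a failed first install with --atomic the release may have been removed entirely, in which case only the events remain:
helm status acd-manager
helm history acd-manager
kubectl get events --sort-by=.metadata.creationTimestamp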
Updating the Deployment
In order to update an existing Helm deployment, whether to modify configuration values or to upgrade to a newer software version, you must use the helm upgrade command. The syntax of this command is exactly the same as for helm install, and the same parameters used at install time must be provided. A shortcut option exists for the helm upgrade command, --install, which, if supplied, will upgrade an existing deployment or install a new deployment if one is not already present.
helm upgrade acd-manager \
/mnt/acd-manager/helm/charts/acd-manager \
--values values.yaml \
--atomic \
--timeout 10m
Verifying the Installation
Verify the Ready status of the Running pods with the following command.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
acd-manager-gateway-6d489c5c66-jb49d 1/1 Running 0 20h
acd-manager-gateway-test-connection 0/1 Completed 0 17h
acd-manager-kafka-controller-0 1/1 Running 0 20h
acd-manager-redis-master-0 1/1 Running 0 20h
acd-manager-rest-api-668d889b76-sn2v4 1/1 Running 0 20h
acd-manager-selection-input-5fc9f4df4c-qz5mw 1/1 Running 0 20h
acd-manager-zitadel-ccb5d9674-9qpn5 1/1 Running 0 20h
acd-manager-zitadel-init-6l8vg 0/1 Completed 0 20h
acd-manager-zitadel-setup-bmbh9 0/2 Completed 0 20h
postgresql-0 1/1 Running 0 20h
Each pod in the output contains a Ready status. This represents the number of pod replicas which are in the Ready state, as compared to the number of desired replicas provisioned by the deployment. Pods marked as “Completed” are typically one-time jobs or initialization tasks that have run to completion and finished successfully.
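If a pod remains in a non-Ready state, the usual first steps are to describe it and inspect its logs; <pod-name> is a placeholder for the actual pod name from the listing above:
kubectl describe pod <pod-name>
kubectl logs <pod-name>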
4 - Clustering
Introduction
This guide explains how to effectively use a multi-node Kubernetes cluster to achieve high availability with the ESB3027 AgileTV CDN Manager. The manager runs on a self-hosted K3s cluster, offering a lightweight yet scalable platform for deploying essential services with high availability. The cluster consists of one or more nodes which may be geographically distributed between multiple datacenters for additional redundancy.
The Kubernetes cluster consists of a series of nodes with either the role “Server” or “Agent”. Server nodes comprise the Kubernetes control plane and provide the necessary services and embedded etcd datastore to manage the state of the cluster. Agent nodes are responsible solely for running workloads. When deploying the Manager service for high availability, a minimum of three server nodes must be deployed, and the total number of server nodes must always be odd. Additional cluster capacity can be achieved by adding zero or more agent nodes to the cluster. It is highly recommended that server nodes be geographically distributed across different datacenters if multiple datacenters are available.
By default, workloads will be assigned to any node in the cluster with available capacity; however, it is possible to apply a taint to one or more server nodes to prevent non-critical workloads from being assigned there, effectively creating an arbiter node. When installing the Manager service, the first node deployed will assume the server role. Additional server and agent nodes can be added at any time.
Persistent storage volumes are provided by Longhorn, a distributed storage driver which ensures persistent, reliable storage across the cluster. Longhorn creates a cluster of replicated storage volumes, accessible from any physical node, while persisting data in local storage on the node. This decouples the workload from the storage, allowing workloads to run on any node in the cluster.
Before considering adding additional nodes to the cluster, ensure that any required ports have been opened according to the documentation in the Networking Guide.
Expanding the Cluster
Before expanding the cluster, it is required that the user be familiar with the standard installation procedure as described in the Installation Guide. At a minimum, on the primary node, the install command must have been performed to initialize the cluster.
Before continuing, a K3s token must be obtained from any server node in the cluster. This token is used by the K3s installer to authenticate additional nodes. The token can be found on any installed server node at /var/lib/rancher/k3s/server/node-token.
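For example, the token can be read on an existing server node and copied for use with the join commands below:
cat /var/lib/rancher/k3s/server/node-token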
For each additional node, mount the ESB3027 AgileTV CDN Manager ISO and execute either the join-server or join-agent command from the root of the ISO.
It is critical that each node has a unique hostname. If that is not the case, set a unique name for each node in the K3S_NODE_NAME environment variable before running this command.
Both commands take two arguments: the first is the URL to any one of the server nodes, of the form https://node:6443, and the second is the K3s token obtained earlier.
To add an additional “Server” node use the following:
/mnt/acd-manager/join-server https://k3s-server:6443 abcdef0123456...987654321
To add an additional “Agent” node use the following:
/mnt/acd-manager/join-agent https://k3s-server:6443 abcdef0123456...987654321
After the command completes, the additional node should appear in the node list as Ready. From any server node, execute the following:
kubectl get nodes
Configuring Longhorn
The default configuration for Longhorn makes several assumptions about how the persistent volume configuration should be managed. It is recommended to update the Longhorn configuration to suit the environment prior to deploying the Manager's Helm charts, since some settings cannot be changed once data exists on a volume.
Longhorn provides a frontend UI which is not exposed by default. This is an intentional security precaution. In order to access the frontend UI, a Kubernetes port forward must be used. The command for setting up the port forward is listed below:
kubectl port-forward -n longhorn-system --address 0.0.0.0 svc/longhorn-frontend 8888:80
This will forward traffic from any node in the cluster on port 8888 to the Longhorn frontend UI service. Open a browser to http://k3s-server:8888 to visit the UI and adjust settings as necessary. After finishing with the Longhorn UI, pressing Ctrl+C on the port forward command will close the tunnel.
Some settings which should be considered include setting up “Default Data Locality”, the “Default Data Path”, the “Default Replica Count”, and the Backup settings.
5 - Networking
K3s Cluster Networking
The following table describes the required ports which must be open between the various nodes in the Kubernetes cluster. In this table “Servers” represents the primary node(s) in the cluster, and “Agents” represents any additional worker nodes which have joined the cluster. For more information see the Official K3s Networking Documentation.
Protocol | Port | Source | Destination | Description |
---|---|---|---|---|
TCP | 2379-2380 | Servers | Servers | Required only for HA with embedded etcd |
TCP | 6443 | Agents | Servers | K3s supervisor and Kubernetes API Server |
UDP | 8472 | All nodes | All nodes | Required only for Flannel VXLAN |
TCP | 10250 | All nodes | All nodes | Kubelet metrics |
UDP | 51820 | All nodes | All nodes | Required only for Flannel Wireguard with IPv4 |
UDP | 51821 | All nodes | All nodes | Required only for Flannel Wireguard with IPv6 |
TCP | 5001 | All nodes | All nodes | Required only for embedded distributed registry (Spegel) |
TCP | 6443 | All nodes | All nodes | Shared with Kubernetes API Server; used for embedded distributed registry (Spegel) |
Note: Port 6443 is used for both the Kubernetes API Server and the embedded distributed registry (Spegel). Ensure that your network configuration accounts for this dual use to avoid conflicts.
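Before joining additional nodes, it can be useful to confirm from the prospective node that the key TCP ports on an existing server are reachable, for example with nc from the nmap-ncat package; k3s-server is a placeholder for an existing server node, and UDP ports such as 8472 cannot be reliably checked this way:
nc -vz k3s-server 6443
nc -vz k3s-server 10250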
6 - Dimensioning
Recommended Configuration
The following is the minimum recommended configuration for running in a production environment.
- Cluster Size: 3 Server Nodes + 0 or more Agent Nodes
- Server Node Specifications:
- OS: Red Hat Enterprise Linux 8 or 9, or compatible.
- Memory: 16GB minimum, 32GB recommended.
- Storage: 64GB minimum, 128GB recommended, available in /var/lib/longhorn
- CPU: A minimum of 4vCPUs per node
- Agent Node Specifications:
- OS: Red Hat Enterprise Linux 8 or 9, or compatible.
- Memory: 16GB minimum, 32GB recommended.
- Storage: 64GB minimum, 128GB recommended, available in /var/lib/longhorn
- CPU: 8vCPUs per node
- Networking: Consistent network connectivity between nodes, with proper port configurations as described in the Networking Guide.
- Nodes may be geographically distributed in different data centers for redundancy. See the Clustering Guide for more information.
Performance Benchmarking
The ESB3027 AgileTV CDN Manager is not currently involved in the client request path, and as such, there is no direct relationship between the performance of the Manager and request rate of the ESB3024 AgileTV CDN Director.
It should be noted, however, that the embedded Kafka instance provided by the Manager is used by the Director for synchronization, and as such, the performance of Kafka can affect the synchronization of events. As determined through several corroborating published benchmarks, it is expected that the throughput of Kafka with the above recommended configuration will be roughly 400,000 messages per second, with the ability to buffer several hours' worth of messages.
7 - Releases
7.1 - Release esb3027-1.2.1
Build date
2025-05-22
Release status
Type: production
Compatibility
This release is compatible with the following product versions:
- AgileTV CDN Director, ESB3024-1.20.1
Breaking changes from previous release
None
Change log
- FIXED: Installer changes ownership of /var, /etc/ and /usr [ESB3027-146]
- FIXED: K3s installer should not be left on root filesystem [ESB3027-149]
Deprecated functionality
None
System requirements
- A minimum CPU architecture level of x86-64-v2 due to inclusion of Oracle Linux 9 inside the container. While all modern CPUs support this architecture level, virtual hypervisors may default to a CPU type that has more compatibility with older processors. If this minimum CPU architecture level is not attained, the containers may refuse to start. See Operating System Compatibility and Building Red Hat Enterprise Linux 9 for the x86-64-v2 Microarchitecture Level for more information.
Known limitations
Installation of the software is only supported using a self-hosted configuration.
7.2 - Release esb3027-1.2.0
Build date
2025-05-14
Release status
Type: production
Compatibility
This release is compatible with the following product versions:
- AgileTV CDN Director, ESB3024-1.20.1
Breaking changes from previous release
None
Change log
- NEW: Remove .sh extension from all scripts on the ISO [ESB3027-102]
- NEW: The script load-certificates.sh should be called generate-ssl-secret [ESB3027-104]
- NEW: Add support for High Availability [ESB3027-108]
- NEW: Enable the K3s Registry Mirror [ESB3027-110]
- NEW: Support for Air-Gapped installations [ESB3027-111]
- NEW: Basic hardware monitoring support for nodes in K8s Cluster [ESB3027-122]
- NEW: Separate docker containers from ISO [ESB3027-124]
- FIXED: GUI is unable to make DELETE request on api/v1/selection_input/modules/blocked_referrers [ESB3027-112]
Deprecated functionality
None
System requirements
- A minimum CPU architecture level of x86-64-v2 due to inclusion of Oracle Linux 9 inside the container. While all modern CPUs support this architecture level, virtual hypervisors may default to a CPU type that has more compatibility with older processors. If this minimum CPU architecture level is not attained, the containers may refuse to start. See Operating System Compatibility and Building Red Hat Enterprise Linux 9 for the x86-64-v2 Microarchitecture Level for more information.
Known limitations
Installation of the software is only supported using a self-hosted configuration.
7.3 - Release esb3027-1.0.0
Build date
2025-04-17
Release status
Type: production
Compatibility
This release is compatible with the following product versions:
- AgileTV CDN Director, ESB3024-1.20.0
Breaking changes from previous release
None
Change log
This is the first production release
Deprecations from previous release
None
System requirements
- A minimum CPU architecture level of x86-64-v2 due to inclusion of Oracle Linux 9 inside the container. While all modern CPUs support this architecture level, virtual hypervisors may default to a CPU type that has more compatibility with older processors. If this minimum CPU architecture level is not attained, the containers may refuse to start. See Operating System Compatibility and Building Red Hat Enterprise Linux 9 for the x86-64-v2 Microarchitecture Level for more information.
Known limitations
Installation of the software is only supported using a self-hosted, single-node configuration.
8 - Glossary
- Access Token
- A credential used to authenticate and authorize access to resources or APIs on behalf of a user, usually issued by an authorization server as part of an OAuth 2.0 flow. It contains the necessary information to verify the user’s identity and define the permissions granted to the token holder.
- Bearer Token
- A type of access token that allows the holder to access protected resources without needing to provide additional credentials. It's typically included in the HTTP Authorization header as Authorization: Bearer <token>, and grants access to any resource that recognizes the token.
- Chart
- A Helm Chart is a collection of files that describe a related set of Kubernetes resources required to deploy an application, tool, or service. It provides a structured way to package, configure, and manage Kubernetes applications.
- Cluster
- A group of interconnected computers or nodes that work together as a single system to provide high availability, scalability and redundancy for applications or services. In Kubernetes, a cluster usually consists of one primary node, and multiple worker or agent nodes.
- Confd
- An AgileTV backend service that hosts the service configuration. Comes with an API, a CLI and a GUI.
- ConfigMap (Kubernetes)
- A Kubernetes resource used to store non-sensitive configuration data in key-value pairs, allowing applications to access configuration settings without hardcoding them into the container images.
- Containerization
- The practice of packaging applications and their dependencies into lightweight portable containers that can run consistently across different computing environments.
- Deployment (Kubernetes)
- A resource object that provides declarative updates to applications by managing the creation and scaling of a set of Pods.
- Director
- The AgileTV Delivery OTT router and related services.
- ESB
- A software bundle that can be separately installed and upgraded, and is released as one entity with one change log. Each ESB is identified with a number. Over time, features and functions within an ESB can change.
- Helm
- A package manager for Kubernetes that simplifies the development and management of applications by using pre-configured templates called charts. It enables users to define, install, and upgrade complex applications on Kubernetes.
- Ingress
- A Kubernetes resource that manages external access to services within a cluster, typically HTTP. It provides routing rules to manage traffic to various services based on hostnames and paths.
- K3s
- A lightweight Kubernetes cluster developed by Rancher Labs. This is a complete Kubernetes system deployed as a single portable binary.
- K8s
- A common abbreviation for Kubernetes.
- Kafka
- Apache Kafka is an open-source distributed event streaming platform designed for building real-time data pipelines and streaming applications. It enables the publication, subscription, storage, and processing of streams of records in a fault-tolerant and scalable manner.
- Kubectl
- The command-line tool for interacting with Kubernetes clusters, allowing users to deploy applications, manage cluster resources, and inspect logs or configurations.
- Kubernetes
- An open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. It enables developers and operations teams to manage complex applications consistently across various environments.
- LoadBalancer
- A networking tool that distributes network traffic across multiple servers or Pods to ensure no single server becomes overwhelmed, improving reliability and performance.
- Manager
- The AgileTV Management Software and related services.
- Namespace
- A mechanism for isolating resources within a Kubernetes cluster, allowing multiple teams or applications to coexist without conflict by providing a scope for names.
- OAuth2
- An open standard for authorization that allows third-party applications to gain limited access to a user’s resources on a server without exposing the user’s credentials.
- Pod
- The smallest deployable unit in Kubernetes that encapsulates one or more containers, sharing the same network and storage resources. It serves as a logical host for tightly coupled applications, allowing them to communicate and function effectively within a cluster.
- Router
- Unless otherwise specified, an HTTP router that manages an OTT session using HTTP redirect. There are also ways to use DNS instead of HTTP.
- Secret (Kubernetes)
- A resource used to store sensitive information, such as passwords, API keys, or tokens in a secure manner. Secrets are encoded in base64 and can be made available to Pods as environment variables or mounted as files, ensuring that sensitive data is not exposed in the application code or configuration files.
- Service (Kubernetes)
- An abstraction that defines a logical set of Pods and a policy to access them, enabling stable networking and load balancing to ensure reliable communication among application components.
- Session Token
- A session token is a temporary, unique identifier generated by a server and issued to a user upon successful authentication.
- Stateful Set (Kubernetes)
- A Kubernetes deployment which guarantees ordering and uniqueness of Pods, typically used for applications that require stable network identities and persistent storage such as with databases.
- Topic (Kafka)
- A category or feed name to which records (messages) are published. Messages flow through a topic in the order in which they are produced, and multiple consumers can subscribe to the stream to process the records in real time.
- Volume (Kubernetes)
- A persistent storage resource in Kubernetes that allows data to be stored and preserved beyond the lifecycle of individual Pods, facilitating data sharing and durability.
- Zitadel
- An open-source identity and access management (IAM) platform designed to handle user authentication and authorization for applications. It provides features like single-sign-on (SSO), multi-factor authentication (MFA), and support for various authentication protocols.