Bakerloo testnet infrastructure


You do not need to understand the details presented here if you just want to use the testnet in the ways explained in Quickstart. However, for more sophisticated node operations, it may be useful to see how Clearmatics has set up its part of the testnet infrastructure, as a similar setup may also work for your organisation.

This section gives an overview of the configuration of the computing infrastructure (clusters, virtual machines and containers) hosting the nodes that Clearmatics runs as a node operator on the Bakerloo testnet, as well as a description of the metrics and logging stacks used by Clearmatics to monitor its nodes.

As an entity operating one or more nodes on the Bakerloo testnet, you may replicate the following configuration in your environment, or develop your own to meet your requirements.

Clearmatics' computing infrastructure

The diagrams below depict the deployment infrastructure for the Bakerloo testnet node network. The validator nodes are split equally across Google Cloud Platform (GCP) and Amazon Web Services (AWS), with additional GCP clusters housing the public RPC endpoint nodes and the monitoring stack, respectively. Based on virtualised computing infrastructure elements, the diagrams show the layered design of the solution Clearmatics implemented: first the Kubernetes clusters on two separate cloud providers, then the Kubernetes nodes and the Kubernetes master, and finally the individual Kubernetes pods within the Kubernetes nodes, which contain the Autonity Go Client nodes, such as Validator0.

GCP K8 Setup

The GCP validator node cluster and the AWS validator node cluster have the same structure, except that the GCP nodes each have an extra Prometheus Kubernetes pod. This is because GCP deploys the Prometheus containers automatically to collect metrics for Google Cloud Operations (Stackdriver), which can be viewed in GCP. For Clearmatics' own metrics solution, telegraf and InfluxDB are used instead (see the section on the logging and metrics pipeline below).

The other difference is that Node1 and Node2 on the GCP cluster each have an additional Kubernetes pod, rpc1 and rpc2 respectively. These contain Autonity full nodes, but instead of acting as validator nodes, they are used to expose the blockchain to incoming connections. Access to these nodes' RPC endpoints can be made available to Clearmatics partners on request.

AWS K8 Setup

There is an additional GCP cluster that houses the two fully public RPC endpoints, rpc3 and rpc4. rpc3 is equipped with a Clearmatics-signed certificate on its https interface (implemented by an NGINX reverse proxy), whereas rpc4 uses a certificate signed by a Certificate Authority. Information on how to connect to these endpoints can be found here.
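These endpoints accept standard JSON-RPC 2.0 requests over https, as with any Ethereum-style node. The sketch below builds such a request body; the endpoint URL is a placeholder, since the actual rpc3/rpc4 hostnames are available from Clearmatics on request.

```python
import json

# Placeholder URL -- the real rpc3/rpc4 hostnames are provided on request.
RPC_URL = "https://rpc.example.com"

def make_rpc_request(method: str, params: list, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request body of the kind an Autonity
    (go-ethereum style) node's HTTPS endpoint accepts."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": request_id}

# Example: ask the node for its latest block number.
payload = make_rpc_request("eth_blockNumber", [])
body = json.dumps(payload)
print(body)

# To actually send it (requires network access to the endpoint):
#   import urllib.request
#   req = urllib.request.Request(RPC_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```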

Public RPC Setup

Clearmatics' monitoring infrastructure

The monitoring network consists of three Kubernetes nodes hosted on another GCP cluster, each in a different zone for resiliency.

  • the Elasticsearch pods in each node are used to index and store log data from the Autonity network
  • the Kibana pods in Node0 and Node1 are used to search and visualise the log data
  • the InfluxDB pod in Node2 is used to store metrics from the Autonity network, and the Grafana dashboard pod in Node0 is used to visualise those metrics
  • the Fluentd and Prometheus pods in each node in the monitoring network are daemon sets, created automatically when a Kubernetes node is deployed, and are used by Stackdriver to view metrics and logs within GCP
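Searching the indexed log data boils down to an Elasticsearch query like the one sketched below. The field name `kubernetes.container_name` and the container name `validator0` are assumptions for illustration; the actual field names depend on the Fluentd configuration.

```python
import json

# A hypothetical Elasticsearch query for the last hour of log lines
# from one container, newest first. Field names are illustrative.
query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"kubernetes.container_name": "validator0"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "size": 50,
    "sort": [{"@timestamp": {"order": "desc"}}],
}
print(json.dumps(query, indent=2))
```

Kibana builds and issues queries of this shape on your behalf; constructing them directly is only needed for scripted access to the log store.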

Further information on the logging and metrics pipeline is given below.

K8 Metrics Setup

Logging and metrics pipeline

There are two separate stacks for logging and metrics, both shown in the image below.

The logging pipeline works as follows:

  • each container that is managed by Clearmatics streams its stdout/stderr to a log file (/var/log/containers/<container_name>) on the node where it is running

  • Fluentd then picks up this log file and ships the log data to Elasticsearch on the monitoring network where it is indexed and stored. Fluentd runs as a daemon set in each Kubernetes cluster, so there is always a Fluentd container running on the same node as the validator nodes (Autonity container)

  • Kibana is then used to facilitate search and visualisation of log data
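The per-container files under /var/log/containers/ typically hold one JSON object per log line (this is the Docker JSON logging driver's format; other container runtimes use different layouts). A minimal sketch of parsing one such line, as Fluentd does before shipping it on:

```python
import json

def parse_container_log_line(line: str) -> dict:
    """Parse one line of a Kubernetes container log file, assuming the
    Docker JSON logging driver's format: {"log": ..., "stream": ..., "time": ...}."""
    record = json.loads(line)
    return {
        "message": record["log"].rstrip("\n"),
        "stream": record["stream"],      # "stdout" or "stderr"
        "timestamp": record["time"],
    }

# Illustrative sample line; the message text is hypothetical.
sample = '{"log":"Imported new chain segment\\n","stream":"stdout","time":"2020-09-01T12:00:00Z"}'
parsed = parse_container_log_line(sample)
print(parsed["message"])  # Imported new chain segment
```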

The metrics pipeline works as follows:

  • within each Kubernetes node, telegraf is deployed, which collects the Autonity metrics produced by the Autonity container (Validator0, etc.)

  • after adding tags to the Autonity metrics, such as afnc_name, telegraf sends the metrics to the InfluxDB database in the monitoring cluster

  • the Grafana dashboard, also within the monitoring network, uses the InfluxDB database as a data source, so the Autonity metrics stored there are visualised in the Grafana dashboard
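The tagging step can be pictured in terms of InfluxDB's line protocol, the wire format telegraf writes: a measurement name, a tag set, a field set, and a timestamp. In this sketch the measurement name `chain_head_block` and its value are illustrative; only the afnc_name tag comes from the pipeline described above.

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, timestamp_ns: int) -> str:
    """Render one metric point in InfluxDB line protocol:
    measurement,tag_set field_set timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# telegraf-style enrichment: an afnc_name tag identifies which node
# the metric came from, so Grafana can filter per validator.
tags = {"afnc_name": "Validator0"}
line = to_line_protocol("chain_head_block", tags, {"value": 12345}, 1600000000000000000)
print(line)
# chain_head_block,afnc_name=Validator0 value=12345 1600000000000000000
```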

Metrics Pipeline
