Usage records can be collected by leveraging the Prometheus agent and the Prometheus Exporter protocol. Among the available usage measurement and collection methods, the agent-based method combines flexibility with easy maintenance: it is simple to launch, and it can be configured and extended to do the heavy lifting. For a comparison with other usage measurement and collection methods, see the Measure Usage and Collect Data chapter for full documentation.
Prometheus is one of the most popular monitoring and observability products, and Paigo leverages it for the agent-based integration method. The agent-based integration method has two steps: usage measurement and usage collection.
- Usage Measurement: Usage data can be measured by any program compatible with the Prometheus Exporter protocol, including Prometheus official exporters, community-supported exporters, third-party exporters, or any custom exporter. See a sample list of available exporters here.
- Usage Collection: Usage data is collected into the Paigo backend by the Prometheus agent, which reads data from the exporters. The Paigo API exposes a Prometheus-protocol-compatible endpoint that listens for usage data.
The Prometheus agent can be deployed alongside the SaaS application wherever it runs (such as virtual machines, containers, or Kubernetes) and across platforms (such as AWS, Azure, GCP, bare metal, a data center, or even a local laptop). Paigo applies server-side transformations to clean, join, filter, or transform the raw data sent from Prometheus into usage records for billing and pricing purposes.
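As a rough sketch, a Prometheus configuration running in agent mode that scrapes exporters and forwards samples to Paigo might look like the following. The remote-write URL and the authentication details here are placeholders, not Paigo's documented endpoint; consult Paigo's API documentation for the real values.

```yaml
# Prometheus agent-mode configuration sketch (illustrative values only).
global:
  scrape_interval: 60s
scrape_configs:
  - job_name: kubernetes-pods          # scrape exporter-compatible pods
    kubernetes_sd_configs:
      - role: pod
remote_write:
  - url: https://paigo.example.com/prometheus/write   # placeholder URL
    authorization:
      type: Bearer
      credentials: REPLACE_WITH_PAIGO_API_TOKEN       # placeholder credential
```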
Code samples for configuring and deploying the agents and exporters can be found in this repository.
This repository contains a sample deployment that leverages Prometheus to gather the usage of Elastic Kubernetes Service (EKS) on AWS and collect it into the Paigo API; however, the Paigo API can receive a wide range of usage data in the Prometheus protocol. The sample deployment installs two components: the Paigo agent and a Prometheus exporter. To follow the sample deployment, the required components must be available in the environment.
1. Clone the repository locally.
2. Configure the client ID and client secret for the Paigo API in the agent configuration file.
3. (OPTIONAL) Authenticate with the Kubernetes cluster using the following command so that the deployment in the next step is authenticated. This step is required for Kubernetes clusters on AWS EKS.
aws eks update-kubeconfig --name your-cluster-name-here
4. Install and start the Paigo agent with the following command.
helm upgrade paigo-agent ./chart --install
5. Verify that the agent is running with the following command.
kubectl get pods -n paigo-billing
There should be a pod whose name starts with paigo-agent- and whose status is Running.
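As a sanity check, the expected output can be parsed programmatically. The pod name and the sample output below are illustrative; your Helm release will produce a different pod suffix.

```shell
# Simulated output of `kubectl get pods -n paigo-billing` (illustrative values).
pods_output='NAME                          READY   STATUS    RESTARTS   AGE
paigo-agent-7f9c4d5b6-x2k1p   1/1     Running   0          2m'

# Extract the status of the pod whose name starts with paigo-agent-.
status=$(echo "$pods_output" | awk 'NR>1 && $1 ~ /^paigo-agent-/ {print $3}')
echo "$status"   # prints Running when the agent pod is healthy
```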
Exporters and agents collect a large volume of data and send it to the Paigo API. However, not all of that data counts as usage records for billing or pricing. For the data collected by an exporter to be linked to a dimension in Paigo and attributed to the correct customer, additional metadata is required. Label the pods running in the Kubernetes cluster with the following schema so that the agent and the Paigo Usage Measurement and Collection engine treat the data as usage records.
- paigoDimensionId: REQUIRED. The Dimension ID of the dimension this usage record is associated with, assigned by Paigo during dimension creation.
- paigoCustomerId: REQUIRED. The Customer ID of the customer this usage record is attributed to, assigned by Paigo during customer creation.
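The labels can also be set directly in a Pod manifest instead of via kubectl. The ID values, pod name, and image below are placeholders:

```yaml
# Pod metadata carrying the Paigo usage-attribution labels (placeholder IDs).
apiVersion: v1
kind: Pod
metadata:
  name: my-saas-workload                            # placeholder name
  labels:
    paigoDimensionId: REPLACE_WITH_DIMENSION_ID     # assigned at dimension creation
    paigoCustomerId: REPLACE_WITH_CUSTOMER_ID       # assigned at customer creation
spec:
  containers:
    - name: app
      image: my-saas-image:latest                   # placeholder image
```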
Use the following sample command to assign the labels to pods in the Kubernetes cluster:
kubectl label pods $REPLACE_WITH_POD_NAME \
  paigoDimensionId=$REPLACE_WITH_DIMENSION_ID \
  paigoCustomerId=$REPLACE_WITH_CUSTOMER_ID
Use the following sample command to stop and undeploy the agent and exporter package:
helm uninstall paigo-agent
A measurement is required only for usage-based cost. For billing purposes, this step can be skipped.
Navigate to the Measurement tab and click the New Measurement button to open the Measurement Template Table. Choose one of the agent-based measurement templates from the table to open the measurement creation form. In the form, some fields are pre-filled based on the template; provide values for the other required or optional fields. Instructions for some of the fields in the form:
- Measurement Frequency: This field dictates how frequently Paigo calculates the raw usage data per tenant. The only supported mode is Automatic. Under this mode, Paigo decides the best frequency at which to sample usage based on many factors, such as the type of infrastructure, platform, region, success rate, API throttling, etc.
The next step is to link the measurement to dimensions. A dimension represents the abstract concept of a product metric, whereas a measurement represents the implementation of usage measurement and collection. When a measurement ID is used in the Measurement ID field of a dimension, the measured usage data is treated as the data points for that particular dimension.
The compute time of Kubernetes Pods is the collective length of time that Pods run in a good state. Paigo measures the running time of Kubernetes Pods with the agent-based method. At an automatically decided frequency, Paigo collects the usage amount of all qualified Pods and attributes the usage to the right customer automatically. The measurement frequency is automatic because Paigo collects various sets of information from the agent at different times and performs transformations on the data, such as joins, to record atomic usage data.
For a Kubernetes Pod to be qualified for usage calculation, the following conditions must be met:
- The Kubernetes cluster must be viewable by the role Paigo assumes, as specified in the measurement configuration.
- Kubernetes Pods must be in the READY state. Pods in any other state are not counted toward usage calculation.
For multiple qualified Pods, Paigo calculates the usage of a sample period as the sum of all running time. For example, if Paigo samples usage every 1 minute and there are three qualified Pods, the total usage measured by Paigo for that period will be 3 minutes.
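The aggregation rule above can be sketched as a small shell calculation, using the numbers from the example (three qualified Pods, each running for the full 1-minute sample period):

```shell
# Usage for one sample period = sum of running time across qualified pods.
sample_seconds=60                               # 1-minute sample period (example)
qualified_pods=3                                # three qualified pods (example)
total=$((qualified_pods * sample_seconds))      # each pod ran the full period
echo "total usage: ${total} seconds"            # 180 seconds = 3 minutes
```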
The metadata collected on each usage record includes all the properties on the Pods, such as cluster information and other metadata.