
Google Cloud Storage Rehydration

Supported Types

| Metrics | Logs | Traces |
| :-: | :-: | :-: |
| ✓ | ✓ | ✓ |

How It Works

  1. This source rehydrates data previously stored by the Google Cloud Storage Destination.
  2. It processes both uncompressed JSON objects and objects compressed with gzip.
  3. You can authenticate to Google Cloud with the credentials or credentials_file parameter, or with Application Default Credentials.
  4. Your authentication credentials must have the Storage Admin permission to read and delete objects.

Notes

This is not a traditional source that continually produces data. Instead, it rehydrates all objects found within a specified time range. Once all objects in that time range have been rehydrated, the source will stop producing data.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| telemetry_types* | telemetrySelector | Logs, Metrics, Traces | Specifies which types of telemetry to rehydrate. |
| bucket_name* | string | "" | The name of the bucket to rehydrate from. |
| project_id | string | "" | The ID of the Google Cloud project the bucket belongs to. Read from the credentials if not configured. |
| auth_type | enum | auto | The method used for authenticating to Google Cloud. Valid values are "auto", "json", or "file". |
| credentials | string | "" | JSON value from a Google Service Account credential file. Required if auth_type is "json". |
| credentials_file | string | "" | Path to a Google Service Account credential file. Required if auth_type is "file". |
| starting_time* | dateTime | "" | The UTC start time for rehydration, in the format "YYYY-MM-DDTHH:MM". |
| ending_time* | dateTime | "" | The UTC end time for rehydration, in the format "YYYY-MM-DDTHH:MM". |
| folder_name | string | "" | Restricts rehydration to objects in a specific folder within the bucket. |
| batch_size | int | 30 | The number of objects to download at once. Impacts performance by controlling the number of concurrent object downloads. |
| delete_on_read | bool | false | If true, objects are deleted after being rehydrated. |
| storage_enable | bool | true | When enabled, a storage extension is used to persist rehydration progress. |
| storage_directory | string | $OIQ_OTEL_COLLECTOR_HOME/storage | Directory for storing rehydration state. Useful for maintaining state and resuming operations after disruptions. |

*required field
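
As a quick reference, a minimal standalone Source that sets only the required fields might look like the sketch below. The id, name, bucket name, and time range are placeholder values, and every omitted parameter falls back to the default listed in the table above.

```yaml
apiVersion: bindplane.observiq.com/v1
kind: Source
metadata:
  id: gcs_rehydration_minimal     # placeholder id
  name: gcs_rehydration_minimal   # placeholder name
spec:
  type: google_cloud_storage_rehydration
  parameters:
    # Required fields only; all other parameters use the defaults in the table above.
    - name: telemetry_types
      value: ['Logs']                  # rehydrate logs only in this sketch
    - name: bucket_name
      value: 'example-bucket'          # placeholder bucket
    - name: starting_time
      value: '2025-03-03T16:00'        # UTC, "YYYY-MM-DDTHH:MM"
    - name: ending_time
      value: '2025-03-03T17:00'
```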

Example Configurations

Basic Configuration

This configuration authenticates using Application Default Credentials and rehydrates data in the specified bucket, folder, and time range.

Web Interface

[Image: Configuring the Google Cloud Storage Rehydration source in the Bindplane web interface]

Standalone Source

```yaml
apiVersion: bindplane.observiq.com/v1
kind: Source
metadata:
  id: google_cloud_storage_rehydration
  name: google_cloud_storage_rehydration
spec:
  type: google_cloud_storage_rehydration
  parameters:
    - name: telemetry_types
      value: ['Logs', 'Metrics', 'Traces']
    - name: bucket_name
      value: 'my-bucket'
    - name: auth_type
      value: 'auto'
    - name: starting_time
      value: '2025-03-03T16:00'
    - name: ending_time
      value: '2025-03-03T17:00'
    - name: folder_name
      value: 'my-folder-name'
    - name: batch_size
      value: 30
    - name: storage_enable
      value: false
```
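
If you prefer to supply the service account key inline rather than rely on Application Default Credentials or a key file, auth_type can be set to "json" and the key passed through the credentials parameter (see the Configuration table above). The sketch below is a hypothetical variant of the basic example; the id, name, and truncated key value are placeholders.

```yaml
apiVersion: bindplane.observiq.com/v1
kind: Source
metadata:
  id: google_cloud_storage_rehydration_json     # placeholder id
  name: google_cloud_storage_rehydration_json   # placeholder name
spec:
  type: google_cloud_storage_rehydration
  parameters:
    - name: telemetry_types
      value: ['Logs', 'Metrics', 'Traces']
    - name: bucket_name
      value: 'my-bucket'
    - name: auth_type
      value: 'json'
    # Inline service account key; the value below is a truncated placeholder.
    - name: credentials
      value: '{"type": "service_account", "project_id": "my-project", "private_key": "..."}'
    - name: starting_time
      value: '2025-03-03T16:00'
    - name: ending_time
      value: '2025-03-03T17:00'
```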

Complete Configuration

This configuration demonstrates all available options for the Google Cloud Storage Rehydration source, including authentication, storage settings, and delete-on-read functionality.

Standalone Source

```yaml
apiVersion: bindplane.observiq.com/v1
kind: Source
metadata:
  id: google_cloud_storage_rehydration
  name: google_cloud_storage_rehydration
spec:
  type: google_cloud_storage_rehydration
  parameters:
    - name: telemetry_types
      value: ['Logs', 'Metrics', 'Traces']
    - name: bucket_name
      value: 'my-bucket'
    - name: project_id
      value: 'my-project'
    - name: auth_type
      value: 'file'
    - name: credentials_file
      value: '/path/to/googlecloud/credentials/file'
    - name: starting_time
      value: '2025-03-03T16:00'
    - name: ending_time
      value: '2025-03-03T17:00'
    - name: folder_name
      value: 'my-folder'
    - name: batch_size
      value: 30
    - name: delete_on_read
      value: true
    - name: storage_enable
      value: true
    - name: storage_directory
      value: '/custom/storage/path'
```