
Create Kx Cluster

finspace_create_kx_cluster R Documentation

Creates a new kdb cluster

Description

Creates a new kdb cluster.

Usage

finspace_create_kx_cluster(clientToken, environmentId, clusterName,
  clusterType, tickerplantLogConfiguration, databases,
  cacheStorageConfigurations, autoScalingConfiguration,
  clusterDescription, capacityConfiguration, releaseLabel,
  vpcConfiguration, initializationScript, commandLineArguments, code,
  executionRole, savedownStorageConfiguration, azMode, availabilityZoneId,
  tags, scalingGroupConfiguration)

Arguments

clientToken

A token that ensures idempotency. This token expires in 10 minutes.
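
A fresh token can be supplied per request so that retries do not create duplicate clusters; a minimal sketch, assuming the uuid package is available:

# Hedged example: any sufficiently unique string works as a client token.
client_token <- uuid::UUIDgenerate()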

environmentId

[required] A unique identifier for the kdb environment.

clusterName

[required] A unique name for the cluster that you want to create.

clusterType

[required] Specifies the type of KDB database that is being created. The following types are available:

  • HDB – A Historical Database. The data is only accessible with read-only permissions from one of the FinSpace managed kdb databases mounted to the cluster.

  • RDB – A Realtime Database. This type of database captures all the data from a ticker plant and stores it in memory until the end of day, after which it writes all of its data to a disk and reloads the HDB. This cluster type requires local storage for temporary storage of data during the savedown process. If you specify this cluster type in your request, you must also provide the savedownStorageConfiguration parameter (see the sketch after this list).

  • GATEWAY – A gateway cluster allows you to access data across processes in kdb systems. It allows you to create your own routing logic using the initialization scripts and custom code. This type of cluster does not require a writable local storage.

  • GP – A general purpose cluster allows you to quickly iterate on code during development by granting greater access to system commands and enabling a fast reload of custom code. This cluster type can optionally mount databases including cache and savedown storage. For this cluster type, the node count is fixed at 1. It does not support autoscaling and supports only SINGLE AZ mode.

  • Tickerplant – A tickerplant cluster allows you to subscribe to feed handlers based on IAM permissions. It can publish to RDBs, other tickerplants, and real-time subscribers (RTS). Tickerplants can persist messages to a log, which is readable by any RDB environment. It supports only a single node, that is, only one kdb process.
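
Because an RDB cluster requires savedown storage, a request for that cluster type might be shaped like the hedged sketch below, assuming a paws finspace client; the environment ID, release label, node type, network identifiers, and database name are placeholders, not values from this page:

svc <- paws::finspace()

svc$create_kx_cluster(
  environmentId = "ENVIRONMENT_ID",       # placeholder
  clusterName = "rdb-cluster-example",    # placeholder
  clusterType = "RDB",
  releaseLabel = "1.0",                   # placeholder release label
  databases = list(
    list(databaseName = "exampledb")      # placeholder database
  ),
  capacityConfiguration = list(
    nodeType = "kx.s.xlarge",             # placeholder node type
    nodeCount = 2
  ),
  savedownStorageConfiguration = list(    # required because clusterType is RDB
    type = "SDS01",
    size = 400                            # size in GB; placeholder
  ),
  vpcConfiguration = list(
    vpcId = "vpc-EXAMPLE",
    securityGroupIds = list("sg-EXAMPLE"),
    subnetIds = list("subnet-EXAMPLE"),
    ipAddressType = "IP_V4"
  ),
  azMode = "SINGLE",
  availabilityZoneId = "use1-az1"         # placeholder AZ ID
)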

tickerplantLogConfiguration

A configuration to store Tickerplant logs. It consists of a list of volumes that will be mounted to your cluster. For the cluster type Tickerplant, the location of the TP volume on the cluster will be available by using the global variable .aws.tp_log_path.

databases

A list of databases that will be available for querying.

cacheStorageConfigurations

The configurations for a read-only cache storage associated with a cluster. This cache is stored on an FSx for Lustre file system that reads from the S3 store.

autoScalingConfiguration

The configuration based on which FinSpace will scale in or scale out nodes in your cluster.
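
For instance, a configuration that keeps between two and five nodes while targeting 60% CPU utilization could be built as in the hedged sketch below; the numbers are illustrative only:

auto_scaling <- list(
  minNodeCount = 2,
  maxNodeCount = 5,
  autoScalingMetric = "CPU_UTILIZATION_PERCENTAGE",
  metricTarget = 60.0,
  scaleInCooldownSeconds = 300.0,
  scaleOutCooldownSeconds = 300.0
)

This list can then be passed as the autoScalingConfiguration argument of svc$create_kx_cluster().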

clusterDescription

A description of the cluster.

capacityConfiguration

A structure for the metadata of a cluster. It includes information like the CPUs needed, memory of instances, and number of instances.

releaseLabel

[required] The version of FinSpace managed kdb to run.

vpcConfiguration

[required] Configuration details about the network where the Privatelink endpoint of the cluster resides.

initializationScript

Specifies a Q program that will be run at launch of a cluster. It is a relative path within the .zip file that contains the custom code, which will be loaded on the cluster. It must include the file name itself. For example, somedir/init.q.

commandLineArguments

Defines key-value pairs that are made available inside the cluster.
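
Each argument is a list with a key and a value; a hedged sketch, where both keys and values are placeholders:

cmd_args <- list(
  list(key = "exampleKey", value = "exampleValue"),
  list(key = "anotherKey", value = "42")
)

This list can be passed as the commandLineArguments argument.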

code

The details of the custom code that you want to use inside a cluster when analyzing data. It consists of the S3 source bucket, location, S3 object version, and the relative path from where the custom code is loaded into the cluster.
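
Combined with the initializationScript argument above, the custom code details might be assembled as in the hedged sketch below; the bucket name and key are placeholders:

custom_code <- list(
  s3Bucket = "amzn-s3-demo-bucket",   # placeholder bucket name
  s3Key = "code/app.zip"              # placeholder key of the .zip containing the custom code
)
init_script <- "somedir/init.q"       # relative path within the .zip, including the file name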

executionRole

An IAM role that defines a set of permissions associated with a cluster. These permissions are assumed when a cluster attempts to access another cluster.

savedownStorageConfiguration

The size and type of the temporary storage that is used to hold data during the savedown process. This parameter is required when you choose clusterType as RDB. All the data written to this storage space is lost when the cluster node is restarted.

azMode

[required] The number of availability zones you want to assign per cluster. This can be one of the following:

  • SINGLE – Assigns one availability zone per cluster.

  • MULTI – Assigns all the availability zones per cluster.

availabilityZoneId

The availability zone identifiers for the requested regions.

tags

A list of key-value pairs to label the cluster. You can add up to 50 tags to a cluster.
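
The request syntax below renders this parameter as list("string"); in practice, under the usual paws convention for map parameters, it is a named list, as in the hedged sketch below (keys and values are placeholders):

cluster_tags <- list(
  team = "quant-research",
  env = "dev"
)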

scalingGroupConfiguration

The structure that stores the configuration details of a scaling group.

Value

A list with the following syntax:

list(
  environmentId = "string",
  status = "PENDING"|"CREATING"|"CREATE_FAILED"|"RUNNING"|"UPDATING"|"DELETING"|"DELETED"|"DELETE_FAILED",
  statusReason = "string",
  clusterName = "string",
  clusterType = "HDB"|"RDB"|"GATEWAY"|"GP"|"TICKERPLANT",
  tickerplantLogConfiguration = list(
    tickerplantLogVolumes = list(
      "string"
    )
  ),
  volumes = list(
    list(
      volumeName = "string",
      volumeType = "NAS_1"
    )
  ),
  databases = list(
    list(
      databaseName = "string",
      cacheConfigurations = list(
        list(
          cacheType = "string",
          dbPaths = list(
            "string"
          ),
          dataviewName = "string"
        )
      ),
      changesetId = "string",
      dataviewName = "string",
      dataviewConfiguration = list(
        dataviewName = "string",
        dataviewVersionId = "string",
        changesetId = "string",
        segmentConfigurations = list(
          list(
            dbPaths = list(
              "string"
            ),
            volumeName = "string",
            onDemand = TRUE|FALSE
          )
        )
      )
    )
  ),
  cacheStorageConfigurations = list(
    list(
      type = "string",
      size = 123
    )
  ),
  autoScalingConfiguration = list(
    minNodeCount = 123,
    maxNodeCount = 123,
    autoScalingMetric = "CPU_UTILIZATION_PERCENTAGE",
    metricTarget = 123.0,
    scaleInCooldownSeconds = 123.0,
    scaleOutCooldownSeconds = 123.0
  ),
  clusterDescription = "string",
  capacityConfiguration = list(
    nodeType = "string",
    nodeCount = 123
  ),
  releaseLabel = "string",
  vpcConfiguration = list(
    vpcId = "string",
    securityGroupIds = list(
      "string"
    ),
    subnetIds = list(
      "string"
    ),
    ipAddressType = "IP_V4"
  ),
  initializationScript = "string",
  commandLineArguments = list(
    list(
      key = "string",
      value = "string"
    )
  ),
  code = list(
    s3Bucket = "string",
    s3Key = "string",
    s3ObjectVersion = "string"
  ),
  executionRole = "string",
  lastModifiedTimestamp = as.POSIXct(
    "2015-01-01"
  ),
  savedownStorageConfiguration = list(
    type = "SDS01",
    size = 123,
    volumeName = "string"
  ),
  azMode = "SINGLE"|"MULTI",
  availabilityZoneId = "string",
  createdTimestamp = as.POSIXct(
    "2015-01-01"
  ),
  scalingGroupConfiguration = list(
    scalingGroupName = "string",
    memoryLimit = 123,
    memoryReservation = 123,
    nodeCount = 123,
    cpu = 123.0
  )
)
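
The returned status is usually PENDING or CREATING immediately after the call; a hedged polling sketch, assuming the companion get_kx_cluster operation on the same client:

# Poll until the cluster reaches a running or terminal state.
wait_for_kx_cluster <- function(svc, environment_id, cluster_name, poll_seconds = 30) {
  repeat {
    status <- svc$get_kx_cluster(
      environmentId = environment_id,
      clusterName = cluster_name
    )$status
    message("Cluster status: ", status)
    if (status %in% c("RUNNING", "CREATE_FAILED", "DELETED", "DELETE_FAILED")) {
      return(status)
    }
    Sys.sleep(poll_seconds)
  }
}

# Example call (placeholders): wait_for_kx_cluster(paws::finspace(), "ENVIRONMENT_ID", "rdb-cluster-example")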

Request syntax

svc$create_kx_cluster(
  clientToken = "string",
  environmentId = "string",
  clusterName = "string",
  clusterType = "HDB"|"RDB"|"GATEWAY"|"GP"|"TICKERPLANT",
  tickerplantLogConfiguration = list(
    tickerplantLogVolumes = list(
      "string"
    )
  ),
  databases = list(
    list(
      databaseName = "string",
      cacheConfigurations = list(
        list(
          cacheType = "string",
          dbPaths = list(
            "string"
          ),
          dataviewName = "string"
        )
      ),
      changesetId = "string",
      dataviewName = "string",
      dataviewConfiguration = list(
        dataviewName = "string",
        dataviewVersionId = "string",
        changesetId = "string",
        segmentConfigurations = list(
          list(
            dbPaths = list(
              "string"
            ),
            volumeName = "string",
            onDemand = TRUE|FALSE
          )
        )
      )
    )
  ),
  cacheStorageConfigurations = list(
    list(
      type = "string",
      size = 123
    )
  ),
  autoScalingConfiguration = list(
    minNodeCount = 123,
    maxNodeCount = 123,
    autoScalingMetric = "CPU_UTILIZATION_PERCENTAGE",
    metricTarget = 123.0,
    scaleInCooldownSeconds = 123.0,
    scaleOutCooldownSeconds = 123.0
  ),
  clusterDescription = "string",
  capacityConfiguration = list(
    nodeType = "string",
    nodeCount = 123
  ),
  releaseLabel = "string",
  vpcConfiguration = list(
    vpcId = "string",
    securityGroupIds = list(
      "string"
    ),
    subnetIds = list(
      "string"
    ),
    ipAddressType = "IP_V4"
  ),
  initializationScript = "string",
  commandLineArguments = list(
    list(
      key = "string",
      value = "string"
    )
  ),
  code = list(
    s3Bucket = "string",
    s3Key = "string",
    s3ObjectVersion = "string"
  ),
  executionRole = "string",
  savedownStorageConfiguration = list(
    type = "SDS01",
    size = 123,
    volumeName = "string"
  ),
  azMode = "SINGLE"|"MULTI",
  availabilityZoneId = "string",
  tags = list(
    "string"
  ),
  scalingGroupConfiguration = list(
    scalingGroupName = "string",
    memoryLimit = 123,
    memoryReservation = 123,
    nodeCount = 123,
    cpu = 123.0
  )
)
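
As a concrete but hedged illustration of the syntax above, a minimal HDB request with a read-only cache might look like the following; every identifier, size, and node type is a placeholder rather than a value taken from this page:

library(paws)

svc <- finspace()
resp <- svc$create_kx_cluster(
  environmentId = "ENVIRONMENT_ID",
  clusterName = "hdb-cluster-example",
  clusterType = "HDB",
  releaseLabel = "1.0",
  databases = list(
    list(
      databaseName = "exampledb",
      cacheConfigurations = list(
        list(cacheType = "CACHE_1000", dbPaths = list("/"))
      )
    )
  ),
  cacheStorageConfigurations = list(
    list(type = "CACHE_1000", size = 1200)   # size in GB; placeholder
  ),
  capacityConfiguration = list(
    nodeType = "kx.s.xlarge",
    nodeCount = 3
  ),
  vpcConfiguration = list(
    vpcId = "vpc-EXAMPLE",
    securityGroupIds = list("sg-EXAMPLE"),
    subnetIds = list("subnet-EXAMPLE"),
    ipAddressType = "IP_V4"
  ),
  azMode = "SINGLE",
  availabilityZoneId = "use1-az1"
)
resp$status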