Create Data Repository Association
fsx_create_data_repository_association
Creates an Amazon FSx for Lustre data repository association (DRA)
Description
Creates an Amazon FSx for Lustre data repository association (DRA). A
data repository association is a link between a directory on the file
system and an Amazon S3 bucket or prefix. You can have a maximum of 8
data repository associations on a file system. Data repository
associations are supported on all FSx for Lustre 2.12 and 2.15 file
systems, excluding file systems with the scratch_1 deployment type.
Each data repository association must have a unique Amazon FSx file system directory and a unique S3 bucket or prefix associated with it. You can configure a data repository association for automatic import only, for automatic export only, or for both. To learn more about linking a data repository to your file system, see Linking your file system to an S3 bucket.
create_data_repository_association isn't supported on Amazon File Cache
resources. To create a DRA on Amazon File Cache, use the
create_file_cache operation.
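As a quick illustration of the operation described above, the following sketch links a directory on an existing FSx for Lustre file system to an S3 prefix using the paws client; the file system ID, bucket, and paths are placeholders rather than values from this documentation.

# Minimal sketch: link /ns1/ on a Lustre file system to an S3 prefix.
library(paws)

fsx <- paws::fsx()

resp <- fsx$create_data_repository_association(
  FileSystemId = "fs-0123456789abcdef0",                     # hypothetical file system ID
  FileSystemPath = "/ns1/",                                  # directory on the file system
  DataRepositoryPath = "s3://amzn-s3-demo-bucket/myPrefix/"  # linked S3 bucket prefix
)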
Usage
fsx_create_data_repository_association(FileSystemId, FileSystemPath,
  DataRepositoryPath, BatchImportMetaDataOnCreate, ImportedFileChunkSize,
  S3, ClientRequestToken, Tags)
Arguments
FileSystemId
[required]
FileSystemPath
A path on the file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2.
This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory.
If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.
DataRepositoryPath
[required] The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/. This path specifies where in the S3 data repository files will be imported from or exported to.
BatchImportMetaDataOnCreate
Set to true to run an import data repository task to import metadata from the data repository to the file system after the data repository association is created. Default is false.
ImportedFileChunkSize
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.
The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
S3
The configuration for an Amazon S3 data repository linked to an Amazon FSx for Lustre file system with a data repository association. The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
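An example configuration is sketched after this argument list.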
ClientRequestToken
Tags
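For illustration only, the S3 argument above can be built as a nested list of import and export event policies; the specific event choices below are assumptions, not defaults.

# Sketch: automatically import new, changed, and deleted objects from the
# linked bucket, and export new and changed files back to it.
s3_config <- list(
  AutoImportPolicy = list(
    Events = list("NEW", "CHANGED", "DELETED")
  ),
  AutoExportPolicy = list(
    Events = list("NEW", "CHANGED")
  )
)
# s3_config is then passed as the S3 argument of
# fsx_create_data_repository_association().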
Value
A list with the following syntax:
list(
  Association = list(
    AssociationId = "string",
    ResourceARN = "string",
    FileSystemId = "string",
    Lifecycle = "CREATING"|"AVAILABLE"|"MISCONFIGURED"|"UPDATING"|"DELETING"|"FAILED",
    FailureDetails = list(
      Message = "string"
    ),
    FileSystemPath = "string",
    DataRepositoryPath = "string",
    BatchImportMetaDataOnCreate = TRUE|FALSE,
    ImportedFileChunkSize = 123,
    S3 = list(
      AutoImportPolicy = list(
        Events = list(
          "NEW"|"CHANGED"|"DELETED"
        )
      ),
      AutoExportPolicy = list(
        Events = list(
          "NEW"|"CHANGED"|"DELETED"
        )
      )
    ),
    Tags = list(
      list(
        Key = "string",
        Value = "string"
      )
    ),
    CreationTime = as.POSIXct(
      "2015-01-01"
    ),
    FileCacheId = "string",
    FileCachePath = "string",
    DataRepositorySubdirectories = list(
      "string"
    ),
    NFS = list(
      Version = "NFS3",
      DnsIps = list(
        "string"
      ),
      AutoExportPolicy = list(
        Events = list(
          "NEW"|"CHANGED"|"DELETED"
        )
      )
    )
  )
)
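As an informal illustration (not part of the return description above), the result is an ordinary nested R list, so fields of the new association can be read directly; the resp name below is assumed to hold the value returned by the call.

# Sketch: inspect fields of the returned data repository association.
assoc <- resp$Association
assoc$AssociationId        # identifier of the new association
assoc$Lifecycle            # typically "CREATING" right after the call
assoc$FileSystemPath       # linked directory on the file system
assoc$DataRepositoryPath   # linked S3 bucket or prefix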
Request syntax
svc$create_data_repository_association(
  FileSystemId = "string",
  FileSystemPath = "string",
  DataRepositoryPath = "string",
  BatchImportMetaDataOnCreate = TRUE|FALSE,
  ImportedFileChunkSize = 123,
  S3 = list(
    AutoImportPolicy = list(
      Events = list(
        "NEW"|"CHANGED"|"DELETED"
      )
    ),
    AutoExportPolicy = list(
      Events = list(
        "NEW"|"CHANGED"|"DELETED"
      )
    )
  ),
  ClientRequestToken = "string",
  Tags = list(
    list(
      Key = "string",
      Value = "string"
    )
  )
)
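The following sketch fills in the request syntax above with placeholder values and then waits for the association to finish creating; it assumes the companion describe_data_repository_associations operation is available on the same svc client.

# Sketch: create a DRA with import and export policies, then poll its lifecycle.
resp <- svc$create_data_repository_association(
  FileSystemId = "fs-0123456789abcdef0",                      # placeholder ID
  FileSystemPath = "/ns1/",
  DataRepositoryPath = "s3://amzn-s3-demo-bucket/myPrefix/",   # placeholder bucket
  BatchImportMetaDataOnCreate = TRUE,
  S3 = list(
    AutoImportPolicy = list(Events = list("NEW", "CHANGED", "DELETED")),
    AutoExportPolicy = list(Events = list("NEW", "CHANGED", "DELETED"))
  ),
  Tags = list(list(Key = "team", Value = "analytics"))
)

# Poll until the association leaves the CREATING state.
repeat {
  desc <- svc$describe_data_repository_associations(
    AssociationIds = list(resp$Association$AssociationId)
  )
  state <- desc$Associations[[1]]$Lifecycle
  if (state != "CREATING") break
  Sys.sleep(15)
}
state  # "AVAILABLE" on success; "MISCONFIGURED" or "FAILED" otherwise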