feat: Add AzureBlobStorage connection type support for stream connections #4357
Conversation
Force-pushed from 0fed114 to 7c789c6
Pull request overview
Adds AzureBlobStorage as a supported stream connection type for the mongodbatlas_stream_connection Terraform resource and related data sources, including schema/model mapping, acceptance tests, and documentation/examples updates.
Changes:
- Extend stream connection schemas/models (codegen + provider) to include an `azure` configuration block and `AzureBlobStorage` type handling.
- Update request/response mapping to support Azure-specific fields and `publicPrivateNetworking` mapping for networking.
- Add acceptance tests, docs, examples, and changelog entries for the new connection type.
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| tools/codegen/models/stream_connection_api.yaml | Adds AzureBlobStorage type + azure fields and allows networking for the new type in codegen model. |
| internal/testutil/acc/pre_check.go | Adds acceptance test pre-check for Azure Blob Storage env vars. |
| internal/serviceapi/streamconnectionapi/resource_schema.go | Generated schema updated to include azure and allow it for AzureBlobStorage. |
| internal/service/streamconnection/resource_stream_connection.go | Adds connection-type constants and Azure object model/type definitions. |
| internal/service/streamconnection/resource_schema.go | Adds AzureBlobStorage schema block and expands type validation list. |
| internal/service/streamconnection/model_stream_connection.go | Maps Azure fields to/from SDK and routes networking to PublicPrivateNetworking for Azure. |
| internal/service/streamconnection/resource_stream_connection_test.go | Adds AzureBlobStorage acceptance test and helpers. |
| internal/service/streamconnection/model_stream_connection_test.go | Adds unit test coverage for Azure SDK<->TF mapping and TF->SDK request creation. |
| examples/mongodbatlas_stream_connection/* | Adds example configuration and variables for Azure Blob Storage connection. |
| docs/resources/stream_connection.md | Adds Azure example and documents Azure arguments/type. |
| docs/data-sources/stream_connections.md | Documents Azure in stream connections data source. |
| docs/data-sources/stream_connection.md | Documents Azure in stream connection data source. |
| .changelog/4356.txt | Adds release notes for AzureBlobStorage support and PRIVATE_LINK networking. |
Force-pushed from 87e352b to 957e89c
APIx bot: a message has been sent to the Docs Slack channel
🤖 Augment PR Summary

Summary: Adds first-class support for `AzureBlobStorage` stream connections.

Technical Notes: Azure networking uses the public/private networking API shape while still exposing a consistent Terraform `networking` attribute.
| "strings" | ||
| "testing" | ||
|
|
||
| "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" |
There was a problem hiding this comment.
| "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" | |
| "github.com/hashicorp/terraform-plugin-testing/helper/acctest" |
| "storage_account_name": schema.StringAttribute{ | ||
| Required: true, | ||
| }, | ||
| "region": schema.StringAttribute{ |
There was a problem hiding this comment.
do we need a UseStateForUnknown plan modifier? How does the plan show if anything is changed in this object?
There was a problem hiding this comment.
since region is computed each time the storage account changes, UseStateForUnknown may cause errors if the planned value doesn't match what the API returns
| "storage_account_name": schema.StringAttribute{ | ||
| Required: true, | ||
| }, | ||
| "region": schema.StringAttribute{ |
There was a problem hiding this comment.
why is this optional/computed?
There was a problem hiding this comment.
it's in the API spec as optional, so it's computed if users don't supply a value. In the UI there's not even a field for users to add it so it would only be something that the api/sdk/tf supports
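A minimal sketch of the behavior discussed above. The `service_principal_id` and `storage_account_name` attribute names come from the PR's diff; the variable names are hypothetical:

```terraform
# Hypothetical sketch: because `region` is Optional/Computed, it can be
# omitted and Atlas fills it in from the storage account; the plan then
# shows it as "(known after apply)".
azure = {
  service_principal_id = var.service_principal_id # hypothetical variable
  storage_account_name = var.storage_account_name # hypothetical variable
  # region intentionally omitted; it may also be set explicitly.
}
```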
Force-pushed from 1a726f0 to f423200
```go
	},
}
default:
	return nil, diag.Diagnostics{diag.NewErrorDiagnostic(
```

> one nit, but I think right now we're never erroring, right? No matter what the type is, we assume it's StreamsKafkaNetworking.

> right, that's the existing behavior; PublicPrivateNetworking was introduced in the last SDK update (so as of yet it will only be used for Azure Blob Storage and GCP Pub/Sub)
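For context, the user-facing `networking` block is the same regardless of connection type; per the thread, only the SDK-side mapping differs. A sketch, assuming the documented `networking`/`access` attribute names:

```terraform
networking = {
  access = {
    # For AzureBlobStorage the provider maps this block to the SDK's
    # PublicPrivateNetworking type rather than StreamsKafkaNetworking.
    type = "PUBLIC"
  }
}
```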
Force-pushed from f423200 to 1040cec
```terraform
}

variable "azure_region" {
  description = "Azure region where you locate the storage account"
```

> What format is used? The Atlas `US_EAST_1` or Azure-style `eastus1`?

> azure style eastus1

> Can you add that example to the docs? (Common source of confusion)
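A docs-style sketch of the clarification requested above: the value is an Azure-style region name, not the Atlas-style `US_EAST_1` (the default shown is an illustrative value):

```terraform
variable "azure_region" {
  description = "Azure region where you locate the storage account (Azure-style name such as \"eastus\", not Atlas-style \"US_EAST_1\")"
  type        = string
  default     = "eastus" # illustrative Azure-style region name
}
```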
EspenAlbert left a comment

Looks quite good. Would maybe consider an example to demonstrate the private link functionality.
Main uncertainty I have is about the stream_connection not using `role_id`; it seems weird to specify the `service_principal_id` twice.
```terraform
connection_name = %[10]q
type            = "AzureBlobStorage"
azure = {
  service_principal_id = %[4]q
```

> This setup confuses me a bit. Why is the `azure` block not referencing `mongodbatlas_cloud_provider_access_setup.azure_setup.role_id`? It seems a bit inconsistent that the Atlas resource requires the Azure `service_principal_id`?

> the way the API is set up, the input is `service_principal_id` and then `role_id` is computed

> IMO: This is a bad experience. `role_id` is what should be configured, but I understand it is a big ask to change the API :/

> I think it has to do with the way the UI is set up, where users will configure their Azure integration and get a service principal ID, and then on the streams connection manager page they choose between their service principal IDs (since there may be multiple), and the role ID is abstracted away.
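A hedged sketch of the flow the reviewers discuss. The `azure` block attributes (`service_principal_id`, `storage_account_name`) come from the PR's diff; the surrounding resource arguments and variable names are illustrative assumptions, not the PR's exact test config:

```terraform
# Sketch: the Azure service principal ID is supplied both to the cloud
# provider access setup and, per the API design, directly to the stream
# connection; role_id is computed rather than configured.
resource "mongodbatlas_cloud_provider_access_setup" "azure_setup" {
  project_id    = var.project_id
  provider_name = "AZURE"
  azure_config {
    atlas_azure_app_id   = var.atlas_azure_app_id
    service_principal_id = var.service_principal_id
    tenant_id            = var.tenant_id
  }
}

resource "mongodbatlas_stream_connection" "blob" {
  project_id      = var.project_id
  workspace_name  = var.workspace_name
  connection_name = "azure-blob-connection" # illustrative name
  type            = "AzureBlobStorage"
  azure = {
    # Not a reference to azure_setup.role_id: the API takes the service
    # principal ID as input and computes role_id from it.
    service_principal_id = var.service_principal_id
    storage_account_name = var.storage_account_name
  }
}
```

This is why the same `service_principal_id` appears twice in a full configuration, which is the inconsistency the reviewer flags.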
## Attributes Reference

```diff
-* `type` - Type of connection. Can be `AWSLambda`, `Cluster`, `GCPPubSub`, `Https`, `Kafka`, `Sample`, or `SchemaRegistry`.
+* `type` - Type of connection. Can be `AWSKinesisDataStreams`, `AWSLambda`, `AzureBlobStorage`, `Cluster`, `GCPPubSub`, `Https`, `Kafka`, `S3`, `Sample`, or `SchemaRegistry`.
```

> looks like some types were missing that are not related to this PR change; consider either doing that in a different PR or at least mentioning it in this PR description. Same for the other files.

> removed from this PR
```diff
 ### AWS
 * `role_arn` - Amazon Resource Name (ARN) that identifies the Amazon Web Services (AWS) Identity and Access Management (IAM) role that MongoDB Cloud assumes when it accesses resources in your AWS account.

+* `service_principal_id` - (Required) UUID that identifies the Azure Service Principal used to access the Azure Blob Storage account.
```

> the 3 new attributes should be in the Azure section instead of the AWS section
```diff
 * `security` - Properties for the secure transport connection to Kafka. For SASL_SSL, this can include the trusted certificate to use. See [security](#security).
 * `networking` - Networking Access Type can either be `PUBLIC` (default) or `VPC`. See [networking](#networking).

+If `type` is `AzureBlobStorage` the the configuration defines the following additional attributes:
```

Suggested change:

```diff
-If `type` is `AzureBlobStorage` the the configuration defines the following additional attributes:
+If `type` is `AzureBlobStorage` the configuration defines the following additional attributes:
```
```go
resourceChecks := []resource.TestCheckFunc{
	checkStreamConnectionExists(),
	resource.TestCheckResourceAttrSet(resourceName, "project_id"),
	resource.TestCheckResourceAttr(resourceName, "workspace_name", workspaceName),
```

> we're in TPF, so there's no need to check customer-provided values; it's only useful to check Computed attributes
```diff
 * `instance_name` - (Optional, Deprecated) Label that identifies the stream processing workspace. Use `workspace_name` instead; this attribute will be removed in a future major version.
 * `connection_name` - (Required) Label that identifies the stream connection. In the case of the Sample type, this is the name of the sample source.
-* `type` - (Required) Type of connection. Can be `AWSLambda`, `Cluster`, `GCPPubSub`, `Https`, `Kafka`, `Sample`, or `SchemaRegistry`.
+* `type` - (Required) Type of connection. Can be `AWSKinesisDataStreams`, `AWSLambda`, `AzureBlobStorage`, `Cluster`, `GCPPubSub`, `Https`, `Kafka`, `S3`, `Sample`, or `SchemaRegistry`.
```

> nit: I guess it's because of the Atlas API, but there is a naming inconsistency, prefixing with AWS/Azure sometimes (e.g., `AWSKinesisDataStreams`, `AWSLambda`, `AzureBlobStorage`) but not with `S3`

> yeah, we had a discussion about this on the streams team since S3 can be used by other non-AWS providers too
@EspenAlbert the private link support is done in a separate PR that I just put up for review here: #4366. I can remove the line from the changelog in this PR if that makes it confusing.
```diff
 resource.ParallelTest(t, resource.TestCase{
-	PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckLogIntegrationEnvAzure(t) },
+	PreCheck: func() { acc.PreCheckAzureEnvWithServicePrincipal(t) },
```

> notice how
> Can we refactor this into the acc package? See code/provider/internal/testutil/acc/stream_instance.go for an existing example.
EspenAlbert left a comment

Thank you for addressing the comments!

Yes, that would be great 👍
Description

Adds support for `AzureBlobStorage` as a new stream connection type in the `mongodbatlas_stream_connection` resource and data sources.

Link to any related issue(s): CLOUDP-383606
Type of change:
Required Checklist: