Key Vault issues in the Azure Europe West region

2025-08-26 10:50 UTC

We're observing ongoing issues with Azure Key Vault in the Europe West region. Microsoft is reporting unavailability of Key Vaults across the region, which may also impact the Keboola platform.

We're actively monitoring the situation and will share updates as they come.


2025-08-26 11:15 UTC

All issues observed so far were related to CMK (Customer Managed Keys). Affected jobs are failing with an error similar to:

Error received from the customer managed key (CMK) provider: 'Cannot invoke "com.microsoft.azure.keyvault.models.KeyVaultError.error()" because the return value of "com.microsoft.azure.keyvault.models.KeyVaultErrorException.body()" is null'


2025-08-26 12:15 UTC

The Azure Key Vault issues in the Europe West region have been resolved. We are no longer observing any related job errors or delays.


Thank you for your patience while we monitored this incident.

New tables created without native types

We identified an issue where all NEW tables created between 2025-08-22 11:00 UTC and 2025-08-25 10:45 UTC were created without native types (untyped). We have reverted to the previous working version and are continuing our investigation.

UPDATE 13:50 UTC: We have completed an impact analysis of the issue. The affected tables have been identified, and we will be contacting the impacted customers directly.


Failing BingAds Extractor jobs

Since August 25, we have been seeing failures in the BingAds Extractor due to authorization errors. Affected jobs are failing with the message: “Authorization failed, please try to reauthorize the configuration.”

Our team is actively working on a fix. We will provide the next update within 30 minutes.

UPDATE (2025-08-25 09:04 UTC): We have fixed the issue and are continuing to monitor it.


UPDATE (2025-08-25 11:30 UTC): The BingAds Extractor issue is recurring, and we are continuing our investigation.

UPDATE (2025-08-25 12:00 UTC): After applying a further fix, we are no longer seeing the issue recur. We continue to monitor the service.

Internal Errors on Snowflake transformations

Since approximately 18.8.2025 22:00 UTC, we have been seeing sporadic Snowflake transformations ending with an Internal Error. These issues appear to be linked to a recent change in the underlying Snowflake database, specifically affecting queries that use CREATE VIEW statements with computed columns. The error may occur on any stack.

If your transformation is using a query similar to this:

CREATE OR REPLACE VIEW TEST_W AS SELECT concat("id", 'TEST','TEST') AS id FROM TEST;

and fails with an Internal error, the following workarounds may resolve the issue:

Use a table instead of a view: 

CREATE OR REPLACE TABLE TEST_T AS SELECT concat("id", 'TEST','TEST') AS id FROM TEST;

Explicitly cast the computed column to a valid VARCHAR length:

CREATE OR REPLACE VIEW TEST_W AS SELECT concat("id", 'TEST','TEST')::VARCHAR(16777216) AS id FROM TEST;
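Note: 16777216 is the maximum VARCHAR length in Snowflake, so the explicit cast does not truncate any data; it only gives the view column a concrete type instead of one inferred from the computed expression.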


Additional Notes:

Transformations already using CREATE (OR REPLACE) TABLE statements are not affected.

Transformations that do not use computed columns, or that cast computed columns explicitly, are also unaffected.


If you're uncertain whether your transformation is affected (we fully acknowledge that the error message is not informative), please don’t hesitate to reach out to our Support team. They’ll help confirm whether this issue applies to your case.
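If you would like to do a first-pass check yourself, the query below is a rough heuristic only (not an official Keboola diagnostic): it lists view definitions in a given schema that contain a CONCAT call, which you can then inspect for missing explicit casts. The schema name is a placeholder and should be replaced with the schema your transformation writes to.

-- Rough heuristic: list views whose definition contains a CONCAT call.
-- 'MY_WORKSPACE_SCHEMA' is a placeholder, not a real Keboola schema name.
SELECT table_name, view_definition
FROM information_schema.views
WHERE table_schema = 'MY_WORKSPACE_SCHEMA'
  AND view_definition ILIKE '%concat(%';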

We’re actively collaborating with Snowflake to resolve the problem. In parallel, we’re investigating potential fixes on our platform to mitigate the impact.

We sincerely apologize for the inconvenience and appreciate your patience. Next update will be provided in 4 hours.

Update: 19.8.2025 13:40 UTC: We're actively working on this with Snowflake, but we don't have an estimated timeline for results yet. Next update will be provided in 4 hours.

Update: 19.8.2025 17:24 UTC: We have confirmed that the root cause of this is an unexpected side effect of a change in a recent Snowflake release. Next update will be provided on 20.8.

Update: 20.8.2025 13:30 UTC: Snowflake is working on reverting the change for affected accounts. Next update will be provided on 21.8.

Update: 21.8.2025 07:25 UTC: The change was gradually reverted on the Snowflake side, and we have observed no occurrences of this issue since approximately 20.8. 19:00 UTC. The incident is therefore resolved, and we will continue working with Snowflake on how to safely re-enable the change in the future.


Planned partial maintenance on Saturday, August 16, 2025 for all Azure stacks

The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief downtime in 10 minutes, around 06:00 UTC. We will update this post with the progress of the partial maintenance.

Update 06:00 UTC: The announced partial maintenance is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.

Update 06:10 UTC: The maintenance has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.

Scheduled Partial Maintenance of all GCP stacks – August 30, 2025

We would like to inform you about the planned maintenance of all Keboola stacks hosted on GCP.

During the database upgrades there will be a short service outage on all GCP stacks, including all single-tenant stacks and GCP US and EU multi-tenant stacks (connection.us-east4.gcp.keboola.com and connection.europe-west3.gcp.keboola.com). This will take place on Saturday, August 30, 2025 between 05:30 and 06:30 UTC.

Effects of the Maintenance

During the above period, services will be scaled down and the processing of jobs may be delayed. For a very brief period (at around 06:00 UTC) the service will be unavailable for up to 10 minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.

Detailed Schedule

  • 05:30–06:00 UTC: processing of new jobs stops.
  • 06:00–06:15 UTC: service enhancement period.
  • 06:15 UTC: processing of jobs resumes.


Stuck jobs - AWS EU and AWS US

2025-08-11 14:23 UTC - We are currently investigating an issue with stuck jobs in the AWS EU (eu-central) and AWS US (us-east) regions. Jobs remain in the "created" state.

2025-08-11 16:15 UTC

In the AWS EU region, we have identified approximately 25 jobs that became stuck and, unfortunately, could not be recovered. These jobs have ended in an error state.

In the AWS US region, we have identified approximately 350 jobs that were stuck. Fortunately, in this case, we were able to fix them, and they are expected to be completed by around 15:40 UTC.

Additionally, scheduled orchestrations in the AWS US region have been delayed since 14:30 UTC, but all are expected to run within the next 30 minutes.

We apologize for the inconvenience and appreciate your patience while we worked to resolve this issue.

Scheduled Partial Maintenance of all Azure stacks – August 16, 2025

We would like to inform you about the planned maintenance of all Keboola stacks hosted on Azure.

During the database upgrades there will be a short service outage on all Azure stacks, including all single-tenant stacks and Azure North Europe multi-tenant stack (connection.north-europe.azure.keboola.com). This will take place on Saturday, August 16, 2025 between 05:50 and 06:30 UTC.

Effects of the Maintenance

During the above period, services will be scaled down and the processing of jobs may be delayed. For a very brief period (at around 06:00 UTC) the service will be unavailable for up to 10 minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.

Detailed Schedule

  • 05:50–06:00 UTC: processing of new jobs stops.
  • 06:00–06:15 UTC: service enhancement period.
  • 06:15 UTC: processing of jobs resumes.



Snowflake Transformation Component Failure

2025-07-28 14:15 UTC

We are currently experiencing issues with the Snowflake Transformation component. Jobs are failing with the following error:

"Actual statement count X did not match the desired statement count Y."

The problem began occurring at approximately 11:00 UTC.

Our engineering team is actively investigating the issue. We will share more information and updates as soon as we have them.

Thank you for your patience and understanding.


2025-07-28 15:30 UTC

We’ve identified the root cause of the issue affecting the Snowflake Transformation component.

The problem was introduced by a bug in a release deployed today at 11:00 UTC. This release has since been rolled back, and new failures should no longer occur.

However, please note that any Snowflake Transformation configurations edited during the affected window may still be impacted and require manual correction. If you made any changes to Snowflake transformations between 11:00 and 15:00 UTC, we recommend reviewing them.


Fix Instructions

To resolve the issue caused by the bug, navigate to the affected transformation configuration detail, click the "EDIT ALL QUERIES" button and make a small change (e.g., add a space at the end of one of the queries) to re-save the entire configuration.


⚠️ Please note:
This action will only fix the query formatting issue introduced by the bug. If you made additional changes to the queries in the meantime—such as attempting to fix broken queries manually—you may still need to manually review and correct individual queries. In such cases, it may be easier to roll back to the last known working configuration version instead.


Thank you for your patience while we worked to resolve this.

Failing jobs on GCP West 3

We are currently experiencing an issue with one of our nodes running jobs.
Our team is actively working to resolve the situation.
We will provide an update once the issue has been addressed.

We're sorry for any inconvenience. Thank you for your understanding.

Update 23.07.2025 15:07 UTC

The affected node has been replaced, and jobs are now running normally as expected.