After more than 11 years of sharing updates on this blog, it’s time for an upgrade.
All future status updates will now live on our new dedicated status page: keboolastatus.com.
Please bookmark the new page to stay informed and enjoy the improved features. No action needed if you’re subscribed by email — all subscriptions have been moved, and email updates will continue without interruption.
Thank you for following along here — see you at our new home!
We are currently investigating issues when creating new Python and R workspaces with the Large backend on AWS stacks.
Next update in 30 minutes.
We're sorry for this inconvenience.
UPDATE 11:29:03 UTC: The issue has been resolved. All workspace operations have returned to normal.
The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief downtime in 10 minutes, around 06:00 UTC. We will update this post with the progress of the partial maintenance.
Update 06:00 UTC: The announced partial maintenance is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.
Update 06:13 UTC: The maintenance has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.
We are currently investigating issues when creating new Pay As You Go projects on https://connection.us-east4.gcp.keboola.com/.
UPDATE 08:13 UTC: The issue has been resolved; Pay As You Go sign-ups are now fully operational.
We are observing a major degradation in the Azure North Europe stack (connection.north-europe.azure.keboola.com). The root cause is still unknown and investigation is ongoing. Users may encounter various errors in the UI. Mitigations have been initiated.
Update 19:08 UTC
We identified issues on one of our Kubernetes nodes. The node was taken out of service, and all systems are now operating normally. No jobs were lost or failed; the impact was limited to visible error messages in the user interface. We apologize for the inconvenience caused.
We would like to inform you about the planned maintenance of all Keboola stacks hosted on Azure.
During the database upgrades there will be a short service outage on all Azure stacks, including all single-tenant stacks and Azure North Europe multi-tenant stack (connection.north-europe.azure.keboola.com). This will take place on Saturday, September 20, 2025 between 05:50 and 06:30 UTC.
Effects of the Maintenance
During the above period, services will be scaled down and the processing of jobs may be delayed. For a very brief period (at around 06:00 UTC) the service will be unavailable for up to 10 minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.
Detailed Schedule
We're investigating an outage in Project Consumption dashboard on connection.us-east4.gcp.keboola.com stack.
UPDATE 07:10 UTC: Project Consumption and Organization Usage dashboards are back online.
We're sorry for this inconvenience.
The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief downtime in 20 minutes, around 06:00 UTC. We will update this post with the progress of the partial maintenance.
Update 06:00 UTC: The announced partial maintenance is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.
Update 06:26 UTC: The maintenance has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.
Due to an underlying infrastructure issue, some jobs failed prematurely in North Europe (Azure). The root cause is still under investigation. Please contact support if your job was affected. We apologize for the disruption and thank you for your understanding.
Jobs should now be running as usual again.
In the beta version of conditional flows, when a flow is executed in the development branch, the child jobs are mistakenly executed in the main (production) branch.
We are already investigating this issue and preparing a fix. If your production data was affected, we sincerely apologize for the inconvenience.
2025-08-27T13:54:39 UTC – We have deployed a fix that ensures conditional flows created on a branch are executed within that same branch, including the storage, component configurations, transformations, and data app versions from the branch where the conditional flow was created. As a result, jobs from the development branch no longer run in main (production) with main configurations.
2025-08-26 10:50 UTC
We're observing ongoing issues with Azure Key Vault in the Europe West region. Microsoft is reporting unavailability of Key Vaults across the region, which may also impact the Keboola platform.
We're actively monitoring the situation and will share updates as they come.
2025-08-26 11:15 UTC
All issues observed so far have been related to CMK (Customer Managed Keys). Affected jobs are failing with an error similar to:
Error received from the customer managed key (CMK) provider: 'Cannot invoke "com.microsoft.azure.keyvault.models.KeyVaultError.error()" because the return value of "com.microsoft.azure.keyvault.models.KeyVaultErrorException.body()" is null'
2025-08-26 12:15 UTC
The Azure Key Vault issues in the Europe West region have been resolved. We are no longer observing any related job errors or delays.
Thank you for your patience while we monitored this incident.
We identified an issue where all NEW tables created between 2025-08-22 11:00 UTC and 2025-08-25 10:45 UTC were created without native types (untyped). We have reverted to the previous working version and continue investigating.
UPDATE 13:50 UTC: We have completed an impact analysis of the issue. The affected tables have been identified, and we will be contacting the impacted customers directly.
Since August 25, we have been seeing failures in the BingAds Extractor due to authorization errors. Affected jobs are failing with the message: “Authorization failed, please try to reauthorize the configuration.”
Our team is actively working on a fix. We will provide the next update within 30 minutes.
UPDATE (2025-08-25 09:04 UTC): We have fixed the issue and are continuing to monitor it.

Since approximately 2025-08-18 22:00 UTC we have been seeing sporadic errors in Snowflake transformations ending with an Internal error. These issues appear to be linked to a recent underlying change in the Snowflake database, specifically affecting queries that use CREATE VIEW statements with computed columns. The error may occur on any stack.
If your transformation is using a query similar to this:
CREATE OR REPLACE VIEW TEST_W AS SELECT concat("id", 'TEST','TEST') AS id FROM TEST;
and fails with an Internal error, the following workarounds may resolve the issue:
Use a table instead of a view:
CREATE OR REPLACE TABLE TEST_T AS SELECT concat("id", 'TEST','TEST') AS id FROM TEST;
Explicitly cast the computed column to a valid VARCHAR length:
CREATE OR REPLACE VIEW TEST_W AS SELECT concat("id", 'TEST','TEST')::VARCHAR(16777216) AS id FROM TEST;
Additional Notes:
Transformations already using CREATE (OR REPLACE) TABLE statements are not affected.
Transformations that do not use computed columns, or that cast computed columns explicitly, are also unaffected.
If you're uncertain whether your transformation is affected (we fully acknowledge that the error message is not informative), please don’t hesitate to reach out to our Support team. They’ll help confirm whether this issue applies to your case.
We’re actively collaborating with Snowflake to resolve the problem. In parallel, we’re investigating potential fixes on our platform to mitigate the impact.
We sincerely apologize for the inconvenience and appreciate your patience. Next update will be provided in 4 hours.
Update 2025-08-19 13:40 UTC: We're actively working on this with Snowflake, but we don't have an estimated timeline for results yet. Next update will be provided in 4 hours.
Update 2025-08-19 17:24 UTC: We have confirmed that the root cause is an unexpected side effect of a change in a recent Snowflake release. Next update will be provided on 2025-08-20.
Update 2025-08-20 13:30 UTC: Snowflake is working on reverting the change for affected accounts. Next update will be provided on 2025-08-21.
Update 2025-08-21 07:25 UTC: The change was gradually reverted on Snowflake's side, and we have observed no occurrences of this issue since approximately 2025-08-20 19:00 UTC. The incident is therefore resolved, and we will continue working with Snowflake on how to safely re-enable the change in the future.
The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief downtime in 10 minutes, around 06:00 UTC. We will update this post with the progress of the partial maintenance.
Update 06:00 UTC: The announced partial maintenance is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.
Update 06:10 UTC: The maintenance has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.
We would like to inform you about the planned maintenance of all Keboola stacks hosted on GCP.
During the database upgrades there will be a short service outage on all GCP stacks, including all single-tenant stacks and GCP US and EU multi-tenant stacks (connection.us-east4.gcp.keboola.com, connection.europe-west3.gcp.keboola.com). This will take place on Saturday, August 30, 2025 between 05:30 and 06:30 UTC.
Effects of the Maintenance
During the above period, services will be scaled down and the processing of jobs may be delayed. For a very brief period (at around 06:00 UTC) the service will be unavailable for up to 10 minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.
Detailed Schedule
2025-08-11 14:23 UTC - We are currently investigating an issue with stuck jobs in the AWS EU (eu-central) and AWS US (us-east) regions. Jobs remain in the "created" state.
2025-08-11 16:15 UTC
In the AWS EU region, we have identified approximately 25 jobs that became stuck and, unfortunately, could not be recovered. These jobs have ended in an error state.
In the AWS US region, we have identified approximately 350 jobs that were stuck. Fortunately, in this case, we were able to fix them, and they are expected to be completed by around 15:40 UTC.
Additionally, scheduled orchestrations in the AWS US region have been delayed since 14:30 UTC, but all are expected to run within the next 30 minutes.
We apologize for the inconvenience and appreciate your patience while we worked to resolve this issue.
We would like to inform you about the planned maintenance of all Keboola stacks hosted on Azure.
During the database upgrades there will be a short service outage on all Azure stacks, including all single-tenant stacks and Azure North Europe multi-tenant stack (connection.north-europe.azure.keboola.com). This will take place on Saturday, August 16, 2025 between 05:50 and 06:30 UTC.
Effects of the Maintenance
During the above period, services will be scaled down and the processing of jobs may be delayed. For a very brief period (at around 06:00 UTC) the service will be unavailable for up to 10 minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.
Detailed Schedule
2025-07-28 14:15 UTC
We are currently experiencing issues with the Snowflake Transformation component. Jobs are failing with the following error:
"Actual statement count X did not match the desired statement count Y."
The problem began occurring at approximately 11:00 UTC.
Our engineering team is actively investigating the issue. We will share more information and updates as soon as we have them.
Thank you for your patience and understanding.
2025-07-28 15:30 UTC
We’ve identified the root cause of the issue affecting the Snowflake Transformation component.
The problem was introduced by a bug in a release deployed today at 11:00 UTC. This release has since been rolled back, and new failures should no longer occur.
However, please note that any Snowflake Transformation configurations edited during the affected window may still be impacted and require manual correction. If you made any changes to Snowflake transformations between 11:00 and 15:00 UTC, we recommend reviewing them.
Fix Instructions
To resolve the issue caused by the bug, navigate to the affected transformation configuration detail, click the "EDIT ALL QUERIES" button and make a small change (e.g., add a space at the end of one of the queries) to re-save the entire configuration.
⚠️ Please note:
This action will only fix the query formatting issue introduced by the bug. If you made additional changes to the queries in the meantime—such as attempting to fix broken queries manually—you may still need to manually review and correct individual queries. In such cases, it may be easier to roll back to the last known working configuration version instead.
Thank you for your patience while we worked to resolve this.
We are currently experiencing an issue with one of our nodes running jobs.
Our team is actively working to resolve the situation.
We will provide an update once the issue has been addressed.
We're sorry for any inconvenience. Thank you for your understanding.
Update 2025-07-23 15:07 UTC
The affected node has been replaced; jobs are now running normally again.
We are investigating a slowdown of Storage jobs on multi-tenant stacks. Projects using Snowflake are affected (BigQuery is not affected). The issue was first noticed on 2025-07-10.
The issue impacts transformations using a backend size larger than the default.
We are preparing a fix so that jobs run as before.
Update 2025-07-18 18:45 UTC - We have released a fix, which appears to be working correctly. We will continue monitoring to confirm that transformation performance is back to its previous level.
Update 2025-07-19 07:07 UTC - The root cause has been fixed, and transformations on non-default backend sizes are performing as they did before.
The support widget was unavailable on GCP Pay-As-You-Go projects from July 4 to July 14, 2025. The issue has now been resolved and the support widget is available for these projects once again.
This issue only affected the support and feedback button in the bottom-right corner of the project page. Support requests made from the Projects menu were not affected.
We're investigating an issue with jobs not starting on Azure North Europe stack (https://connection.north-europe.azure.keboola.com/).
Update 08:45 UTC - we've identified the root cause and are working on a fix. The jobs are unblocked for now, but some delays might still occur.
Update 10:22 UTC - The root cause has been fixed.
We are currently investigating a problem with the creation of new projects for Pay As You Go subscriptions on https://connection.us-east4.gcp.keboola.com/
Update 07:30 UTC - The issue has been resolved; the Pay As You Go wizard is now operational.
The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief downtime in 15 minutes, around 06:00 UTC. We will update this post with the progress of the partial maintenance.
Update 06:00 UTC: The announced partial maintenance is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.
Update 06:10 UTC: The maintenance has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.
We are currently experiencing degraded performance on our GCP stacks due to ongoing global GCP infrastructure issues.
The following stacks are affected:
As a result, you may observe random job failures or delays. Jobs may fail with intermittent errors unrelated to the actual job configuration or data.
Our team is actively monitoring the situation and working to restore full functionality. We appreciate your patience and will provide updates as more information becomes available.
[Update – 2025-06-12 19:37 UTC] GCP is in the process of restoring its services. We are observing that our GCP stacks are beginning to operate as expected. API latency has returned to normal levels, and the job queue is starting to clear.
We will continue to monitor the situation closely.
[Update – 2025-06-12 20:25 UTC] The incident has been resolved. All GCP stacks are stable and the platform is fully operational. Thank you for your patience.
We are currently investigating a delay in the job listing updates on https://connection.keboola.com/. New jobs are not appearing in the job overview, and the statuses of existing jobs are not being refreshed.
This issue does not affect job processing or scheduling. Jobs are still running as expected.
We will provide the next update as soon as new information becomes available.
Update 13:15 UTC: We have identified the root cause and resolved the issue. All jobs are now listed correctly, and their statuses are updating immediately.
We are currently experiencing an outage of the Python Workspace on the stack https://connection.eu-central-1.keboola.com. Our team is actively working on resolving the issue. Please monitor this status page for further updates.
Update 2025-06-10 14:45 UTC: Python Workspaces are fully operational after a brief overload-related outage. Apologies for the inconvenience.
Update 2025-06-10 14:55 UTC: Due to the earlier Python Workspace outage, job processing was delayed by approximately 15 minutes. The issue has been stabilized.
Update 2025-06-10 14:59 UTC: The root cause of the incident was a failover on the MySQL RDS database. We apologize for the disruption and appreciate your patience.
We are currently experiencing an issue across all stacks where login to Keboola Connection is not possible. Attempting to access the application results in a 404 Not Found error.
Workaround: After login, if redirected to a 404 URL ending with `/admin`, manually remove the `/admin` part and reload the page.
Our engineering team is actively investigating the root cause. We will share an update as soon as we know more or within the next 30 minutes.
Update 2025-06-10 07:17 UTC: The issue affects only login via Google. Other login methods remain functional. We are continuing to investigate the root cause. Next update will follow within 30 minutes.
Update 2025-06-10 07:42 UTC: The issue has been resolved. Login via Google is now working as expected.
Thank you for your patience.
We would like to inform you about the planned maintenance of all Keboola stacks hosted on Azure.
During the database upgrades there will be a short service outage on all Azure stacks, including all single-tenant stacks and Azure North Europe multi-tenant stack (connection.north-europe.azure.keboola.com). This will take place on Saturday, June 21, 2025 between 05:30 and 06:30 UTC.
Effects of the Maintenance
During the above period, services will be scaled down and the processing of jobs may be delayed. For a very brief period (at around 06:00 UTC) the service will be unavailable for up to 10 minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.
Detailed Schedule