Planned partial maintenance on Saturday, June 21, 2025 for all Azure stacks

The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief period of downtime in approximately 15 minutes, around 06:00 UTC. We will update this post with the progress of the partial maintenance.

Update 06:00 UTC: The announced partial maintenance is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.

Update 06:10 UTC: The maintenance has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.

Degraded Performance on GCP Stacks

We are currently experiencing degraded performance on our GCP stacks due to ongoing global GCP infrastructure issues.

The following stacks are affected:

As a result, you may observe random job failures or delays. Jobs may fail with intermittent errors unrelated to the actual job configuration or data.

Our team is actively monitoring the situation and working to restore full functionality. We appreciate your patience and will provide updates as more information becomes available.

[Update – 2025-06-12 19:37 UTC] GCP is in the process of restoring its services. We are observing that our GCP stacks are beginning to operate as expected. API latency has returned to normal levels, and the job queue is starting to clear.

We will continue to monitor the situation closely.

[Update – 2025-06-12 20:25 UTC] The incident has been resolved. All GCP stacks are stable and the platform is fully operational. Thank you for your patience.

Job Listing Not Updating on connection.keboola.com

We are currently investigating a delay in the job listing updates on https://connection.keboola.com/. New jobs are not appearing in the job overview, and the statuses of existing jobs are not being refreshed.

This issue does not affect job processing or scheduling. Jobs are still running as expected.
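If you need to confirm the state of a specific job while the listing is delayed, you can query it directly via the API instead of the UI. The sketch below is a minimal illustration, not an official instruction: the Jobs Queue endpoint (https://queue.keboola.com for the connection.keboola.com stack), the `X-StorageApi-Token` header, and the response field names are assumptions based on Keboola's public API documentation and should be verified for your project.

```python
import requests

# Illustrative sketch only: the endpoint, header name, and response fields are
# assumptions based on Keboola's public Jobs Queue API docs; verify before use.
QUEUE_API = "https://queue.keboola.com"   # assumed queue API for connection.keboola.com
TOKEN = "YOUR-STORAGE-API-TOKEN"          # placeholder Storage API token
JOB_ID = "123456789"                      # placeholder job ID

resp = requests.get(
    f"{QUEUE_API}/jobs/{JOB_ID}",
    headers={"X-StorageApi-Token": TOKEN},
    timeout=30,
)
resp.raise_for_status()
job = resp.json()

# The job status reflects actual processing, even while the UI listing lags behind.
print(job.get("status"), job.get("createdTime"), job.get("endTime"))
```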

We will provide the next update as soon as new information becomes available.

Update 13:15 UTC:
We have identified the root cause and resolved the issue. All jobs are now listed correctly, and their statuses are updating immediately.


Python Workspace Outage on AWS EU

We are currently experiencing an outage of the Python Workspace on the stack https://connection.eu-central-1.keboola.com. Our team is actively working on resolving the issue. Please monitor this status page for further updates.

Update 2025-06-10 14:45 UTC: Python Workspaces are fully operational after a brief overload-related outage. Apologies for the inconvenience.

Update 2025-06-10 14:55 UTC: Due to the earlier Python Workspace outage, job processing was delayed by approximately 15 minutes. The issue has been stabilized.

Update 2025-06-10 14:59 UTC: The root cause of the incident was a failover on the MySQL RDS database. We apologize for the disruption and appreciate your patience.

Google Login to Keboola Not Working

We are currently experiencing an issue across all stacks where login to Keboola Connection is not possible. Attempting to access the application results in a 404 Not Found error.

Workaround: After login, if you are redirected to a URL ending with `/admin` that returns a 404 error, manually remove the `/admin` part from the address and reload the page.
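For clarity, here is a minimal sketch of the URL correction described above; in practice you simply edit the address bar, and the example URL below is hypothetical.

```python
# Minimal illustration of the workaround: strip a trailing "/admin" from the
# URL you were redirected to, then reload the corrected URL in the browser.
redirected_url = "https://connection.keboola.com/admin"  # hypothetical example

if redirected_url.rstrip("/").endswith("/admin"):
    fixed_url = redirected_url.rstrip("/")[: -len("/admin")]
else:
    fixed_url = redirected_url

print(fixed_url)  # -> https://connection.keboola.com
```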

Our engineering team is actively investigating the root cause. We will share an update as soon as we know more or within the next 30 minutes.

Update 2025-06-10 07:17 UTC: The issue affects only login via Google. Other login methods remain functional. We are continuing to investigate the root cause. Next update will follow within 30 minutes.

Update 2025-06-10 07:42 UTC: The issue has been resolved. Login via Google is now working as expected.

Thank you for your patience.

Scheduled Partial Maintenance of all Azure stacks – June 21, 2025

We would like to inform you about the planned maintenance of all Keboola stacks hosted on Azure.

During the database upgrades there will be a short service outage on all Azure stacks, including all single-tenant stacks and the Azure North Europe multi-tenant stack (connection.north-europe.azure.keboola.com). This will take place on Saturday, June 21, 2025 between 05:30 and 06:30 UTC.

Effects of the Maintenance

During the above period, services will be scaled down and the processing of jobs may be delayed. For a very brief period (at around 06:00 UTC) the service will be unavailable for up to 10 minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.

Detailed Schedule

  • 05:30–06:00 UTC: processing of new jobs stops.
  • 06:00–06:15 UTC: database upgrade; the service may be unavailable for up to 10 minutes and APIs may return a 500 error code.
  • 06:15 UTC: processing of jobs resumes.


Triggers Cannot Be Set on Newly Created Flows

We are investigating an issue where newly created flows are assigned IDs for which triggers cannot be set. All stacks are currently affected.

We will update this post as new information becomes available. If you have any questions or concerns, please reach out to our support team.

Update June 05 17:50 UTC: We released a fix; trigger creation now returns a 201 status response, but the schedule is still not available in the UI. We will continue to investigate where the problem lies.

Update June 05 18:15 UTC: We see things operating as usual again. However, flow configurations created between 17:50 UTC and 18:15 UTC will not work properly with triggers.

In addition, all flows created between approximately June 03 15:00 UTC and June 05 17:50 UTC with ULID identifiers remain unusable with table triggers. We will make sure these flows can be scheduled with triggers later. For now, the recommended approach is to delete the flows with ULID identifiers and create new ones, which will receive integer identifiers.
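If you are unsure which of your flows are affected, the following sketch shows one way to tell a legacy integer identifier from a ULID (a 26-character Crockford base32 string). The flow IDs in the example are hypothetical, and how you obtain the list of configuration IDs depends on your setup.

```python
import re

# ULIDs are 26 characters from Crockford's base32 alphabet (no I, L, O, U).
ULID_RE = re.compile(r"^[0-9A-HJKMNP-TV-Z]{26}$")

def is_ulid(config_id: str) -> bool:
    """Return True if the given flow configuration ID looks like a ULID."""
    return bool(ULID_RE.fullmatch(config_id.upper()))

# Hypothetical example IDs; replace with the configuration IDs of your flows.
flow_ids = ["1238642", "01JX0Z9K3T5Q8R2M4N6P7S9V1B"]

for fid in flow_ids:
    kind = "ULID (affected - recreate this flow)" if is_ulid(fid) else "integer (not affected)"
    print(f"{fid}: {kind}")
```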

We are sorry for the inconvenience.

Planned partial maintenance on Saturday, May 24, 2025 for all GCP stacks

The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief period of downtime in approximately 15 minutes, around 06:00 UTC. We will update this post with the progress of the partial maintenance.

Update 06:00 UTC: The announced partial maintenance is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.

Update 06:11 UTC: The maintenance has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.

Azure Stacks: Workspace, Data Apps and Jobs Startup Failures

14:00 UTC: All Azure stacks are currently experiencing issues starting Python Workspaces, Data Apps and Jobs. The root cause appears to be a failure to pull the required Docker images.

On startup, you may observe a long startup time that ends with an internal error. We apologize for the situation.

Update 14:52 UTC: We’ve applied a fix and all systems are now operating normally. We apologize for the disruption.

Degraded performance of Azure Storage accounts in all Azure West Europe stacks

Since May 13, 20:00 UTC, we have been seeing intermittent delays when performing service management operations on Azure Storage accounts hosted in the West Europe region. Storage availability and data-processing workflows remain fully operational; however, you may notice job delays.

Azure’s latest update (not publicly available):

Current Status: Our monitoring shows that our mitigation strategy has worked and less than 5% of traffic is currently impacted; most customer impact should already be mitigated. We continue to monitor our infrastructure and expect the delays to decrease over the next few hours.

We will update this post as new information becomes available. If you have any questions or concerns, please reach out to our support team.

Update May 16, 08:00 UTC: Azure's latest update (not publicly available):

Service restored, and customer impact mitigated.

Based on our own findings, we can confirm the issue is resolved.

We are sorry for the inconvenience.