We would like to inform you that our Buffer API is being deprecated as part of our commitment to providing you with the best tools and services. Effective March 1, 2025, the Buffer API will no longer be available.
To ensure a seamless transition, we are introducing Data Streams, a full replacement: a powerful, improved service designed to enhance your data integration experience. You can learn more about Data Streams and its benefits HERE.
We’ve created a migration tool to help you move current Buffer API endpoints to Data Streams quickly and easily. Our support team is also here to provide guidance and support throughout the process.
You can read more about Data Streams and how to set it up HERE.
Our support team is here to help you every step of the way. If you have any questions or concerns about the migration, feel free to reach out to us.
Thank you for your understanding and for being a valued part of the Keboola community. We’re excited for you to experience the benefits of Data Streams!
We are experiencing a problem with job data replication on the AWS EU stack (https://connection.eu-central-1.keboola.com). Jobs are being triggered and run properly, but they don't appear in the jobs list across the whole UI.
Investigation is under way. We will post any updates available.
We are sorry for any inconvenience this may have caused and thank you for your patience.
2024-12-18 14:35 UTC: We have identified and fixed the issue with the component responsible for job data replication. New jobs are now displaying correctly in the lists, and all jobs that ran during the outage period have been successfully backfilled.
We are observing a Buffer API outage on the AWS US stack. The API is currently not receiving any requests, which also affects other platform features, such as AI suggestions.
Investigation is under way. We will post any updates available.
We are sorry for any inconvenience this may have caused and thank you for your patience.
2024-12-18 15:15 UTC: We have discovered that the data storage behind the Buffer API was corrupted. While the service is now fully operational, it had not been receiving any incoming requests since 2024-12-17 23:00 UTC. Additionally, Buffer endpoint configurations were lost due to the data corruption. All customers with configured Buffer API integrations will be contacted by their account executives to explain the situation in more detail.
We are investigating failing Storage jobs with the error: "Insufficient privileges to operate on account 'KEBOOLA'" on the AWS US East stack (connection.keboola.com).
Currently, it is not possible to create Snowflake workspaces.
We apologize for the inconvenience and will provide an update in 30 minutes.
UPDATE 2024-12-15 15:33 UTC
We have discovered the root cause and are working on a solution. The next update will be provided as soon as it is resolved or in 30 minutes.
UPDATE 2024-12-15 15:55 UTC
The issue has been identified and fixed. All projects are now operating normally.
We are sorry for any inconvenience this may have caused and thank you for your patience.
We noticed that, starting around 12:30 CET, Data Apps stopped working on GCP stacks.
We apologize for the inconvenience; we are currently investigating the issue and will provide an update in 30 minutes.
UPDATE 2024-12-11 13:01 UTC: We have identified the root cause, which only affected configurations with authorization enabled or a secret value set. This caused failures during the decryption process. We have fixed the permissions required for decryption in the runtime job, and all systems are now operating normally. We sincerely apologize for the inconvenience.
2024-11-20 08:00 UTC: Since 04:00 AM, we have been experiencing a high error rate on connection.us-east4.gcp.keboola.com affecting Snowflake operations. This includes Snowflake transformations, storage import/export jobs, and other processes, resulting in intermittent application errors.
2024-11-20 09:40 UTC: We're still waiting for confirmation from Snowflake.
2024-11-20 12:00 UTC: We're still waiting for confirmation from Snowflake.
2024-11-20 14:00 UTC: We're still waiting for confirmation from Snowflake.
2024-11-20 15:00 UTC: We haven't seen an error since 11:20 UTC and Snowflake have confirmed a fix on their side. All operations are back to normal. This is the last update.
We're sorry for the inconvenience and thanks for your patience.
The announced partial maintenance of connection.keboola.com and connection.eu-central-1.keboola.com will start in one hour at 07:00 UTC. We will update this post with the progress of the partial maintenance for each stack.
Update 07:00 UTC: The scheduled partial maintenance for connection.keboola.com has begun.
Update 07:15 UTC: The maintenance of connection.keboola.com has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience. Maintenance of connection.eu-central-1.keboola.com will start at 08:00 UTC.
Update 08:00 UTC: The scheduled partial maintenance for connection.eu-central-1.keboola.com has begun.
Update 08:15 UTC: The maintenance of connection.eu-central-1.keboola.com has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.

13.11.2024 2:16 UTC - Due to an incident on Snowflake, jobs on the Azure North Europe stack (https://connection.north-europe.azure.keboola.com/) are executing very slowly. So far, there don't seem to be any errors. The symptoms include jobs running longer than usual, workspaces not starting, and table previews not loading data.
The cause of the issue on Snowflake side is not yet known - feel free to view the details at https://status.snowflake.com/incidents/kjd4lpptzmkh.
Update 2:48 UTC - Snowflake incident is still in progress.
Update 3:10 UTC - Snowflake incident is still in progress.
Update 4:35 UTC - The cause of the incident has been identified as an outage at a third-party provider.
Update 5:59 UTC - The incident has been identified as an Azure connectivity issue and published at https://azure.status.microsoft/en-us/status. There is no ETA available.
Update 7:33 UTC - While the Azure incident is still not resolved, the Snowflake performance is now back to normal. The processing of jobs and other operations of Keboola platform is also back to normal. We will continue to monitor the situation closely.
Update 8:16 UTC - There are still delays in processing jobs and finishing flows. We're working on a fix.
Update 9:28 UTC - All jobs should now be processing normally. The Azure incident is still not resolved, however, so errors may reappear.
Update 10:50 UTC - All operations are back to normal.
We are sorry for this inconvenience and thanks for your patience.
On November 12, between 8:00 and 19:00 UTC, flows were failing with the error "Cannot process orchestration configuration: Unrecognized option 'isFake'". This was caused by a recent UI update; we have reverted to the previous working version, and it is working as expected. The error affected only flows created or edited within the above timeframe. We have fixed the root cause and repaired most of the affected flow configurations.
However, if you still experience the problem (since the unrecognized option was stored in the flow configuration), the option may have to be removed manually. To do so, go to the "debug mode" of the flow configuration (on the flow detail page, click the three dots -> debug mode) and remove any "isFake": true/false property from the flow configuration. If you are unsure about the process, please contact our support.
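For illustration, the manual cleanup amounts to stripping every "isFake" key from the configuration JSON. The sketch below is a minimal, hypothetical example; the helper name and the sample configuration are ours, not part of the product:

```python
import json

def strip_is_fake(config):
    # Recursively drop any "isFake" keys from a flow configuration.
    if isinstance(config, dict):
        return {k: strip_is_fake(v) for k, v in config.items() if k != "isFake"}
    if isinstance(config, list):
        return [strip_is_fake(item) for item in config]
    return config

# Hypothetical flow configuration with the stray option at two levels.
raw = '{"tasks": [{"id": 1, "isFake": true}], "isFake": false}'
print(json.dumps(strip_is_fake(json.loads(raw))))  # {"tasks": [{"id": 1}]}
```

In the UI you would make the equivalent change by hand in debug mode; the snippet just shows which keys to delete.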
12.11.2024 10:30 UTC - Since approximately 10:00 UTC, there has been an issue running jobs on the Azure North Europe stack https://connection.north-europe.azure.keboola.com/. Jobs cannot be run (including scheduled ones). We have identified the root cause and are working on a fix.
Other stacks are not affected.
Update 10:57 UTC - The root cause is now resolved, and new jobs are starting properly. However, some flows may be stuck in a processing or terminating state, unable to finish. We're working on a fix for this.
Update 11:27 UTC - The fix will be deployed within the next 30 minutes.
Update 12:10 UTC - We are sorry for the delay, the fix will be released any minute now.
Update 12:21 UTC - All stuck jobs are now processed and all flows work as expected.
We are sorry for this inconvenience and thank you for your patience.
On November 8th, 2024, between 8:17 AM and 8:28 AM UTC, we experienced a brief outage in our notification service on all GCP stacks due to a deployment misconfiguration. The issue was identified and promptly resolved, and the service has been fully operational since 8:28 AM UTC. During this period, no notifications were sent.
We apologize for any inconvenience this may have caused and appreciate your understanding.
The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief downtime in 30 minutes, around 8:00 UTC for GCP europe-west3, and 8:30 UTC for GCP us-east4 respectively. We will update this post with the progress of the partial maintenance.
2024-11-02 8:00 UTC: The announced partial maintenance of europe-west3 stack is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.
2024-11-02 8:13 UTC: The maintenance of europe-west3 stack has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.
2024-11-02 8:30 UTC: The announced partial maintenance of us-east4 stack is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.
2024-11-02 8:39 UTC: The maintenance of us-east4 stack has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly.
🎉 This concludes GCP maintenance! Thank you for your patience.
We are investigating job start delays on the https://connection.us-east4.gcp.keboola.com/ stack. We will provide updates as new information becomes available.
Update 06:45 UTC: The platform has stabilized, and the job backlog has been cleared. Thank you for your patience, and we apologize for any inconvenience caused.
We are experiencing problems on our GCP EU stack (https://connection.europe-west3.gcp.keboola.com/). We are deeply sorry for the inconvenience this may cause. In the user interface, you may encounter error alerts or slow job processing. Next update in 30 minutes.
Oct 28th 21:11 UTC: We're still investigating scheduling issues within our underlying infrastructure. Next update in 30 minutes.
Oct 28th 21:18 UTC: The issue has been resolved, and job listing should now work as expected. Thank you for your patience, and sorry for any inconvenience.
The announced partial maintenance has just started. The platform has been scaled down and is no longer accepting new jobs. We expect a brief downtime in 30 minutes, around 12:00 UTC. We will update this post with the progress of the partial maintenance.
Update 12:00 UTC: The announced partial maintenance is ongoing, and we are expecting downtime to begin any minute now. We will continue to update this post as the maintenance progresses.
Update 12:22 UTC: The maintenance has been completed, and all services have been scaled back up. The platform is fully operational, and jobs are now being processed as usual. All delayed jobs will be processed shortly. Thank you for your patience.
2024-10-19 10:26 UTC We have noticed a slowdown in the processing of jobs on the https://connection.eu-central-1.keboola.com stack, but the jobs shouldn't end with an error.
Update 2024-10-19 10:57 UTC The problem has been identified and solved, the platform should be stable again.
We apologize for the inconvenience.
We would like to inform you about the planned maintenance of Keboola stacks hosted on AWS, Azure, and GCP.
This maintenance is necessary to keep our services running smoothly and securely. Please note the following schedules and the stacks affected.
During database upgrades there will be a short service disruption on all Azure stacks, including all single-tenant stacks and Azure North Europe multi-tenant stack (connection.north-europe.azure.keboola.com). This will take place on Saturday, October 26, 2024 between 11:30 and 12:30 UTC.
Effects of the Maintenance
During the above period, services will be scaled down and the processing of jobs may be delayed. For a very brief period (at around 12:00 UTC) the service will be unavailable for up to 10 minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.
Detailed Schedule
11:30–12:00 UTC: processing of new jobs stops.
12:00–12:15 UTC: service disruption
12:15 UTC: processing of jobs starts.
During database upgrades there will be a short service disruption on both GCP multi-tenant stacks. Here is the schedule.
GCP eu-west3 (connection.europe-west3.gcp.keboola.com): Saturday, November 2, 2024, at 08:00 UTC.
GCP us-east4 (connection.us-east4.gcp.keboola.com): Saturday, November 2, 2024, at 08:30 UTC.
Effect of the Maintenance
During the above periods, services will be scaled down and the processing of jobs may be delayed. For very brief periods (at around 8:00 UTC and 8:30 UTC, respectively) the service will be unavailable for up to five minutes and APIs may respond with a 500 error code. After that, all services will scale up and start processing all jobs. No running jobs, data apps, or workspaces will be affected. Delayed scheduled flows and queued jobs will resume after the maintenance is completed.
Detailed Schedule for eu-west3
7:30–8:00 UTC: processing of new jobs stops.
8:00–8:10 UTC: service disruption
8:15 UTC: processing of jobs starts.
Detailed Schedule for us-east4
8:00–8:30 UTC: processing of new jobs stops.
8:30–8:40 UTC: service disruption
8:45 UTC: processing of jobs starts.
During database upgrades there will be a limited service disruption on our AWS multi-tenant stacks. Here is the schedule.
AWS us-east-1 (connection.keboola.com): Saturday, November 16, 2024, at 07:00 UTC.
AWS eu-central-1 (connection.eu-central-1.keboola.com): Saturday, November 16, 2024, at 08:00 UTC.
Effect of the Maintenance
The maintenance is expected to last no longer than 15 minutes, during which jobs may be delayed. While you will be able to log into the Keboola platform, starting new jobs will not be possible during the maintenance. Jobs already running will not be canceled—only delayed. Running data apps or workspaces will not be affected. Scheduled jobs will automatically start after the maintenance is completed.
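Since the APIs may briefly answer with a 500 error code during the windows above, API clients can ride out the maintenance with a simple retry-and-backoff loop. This is a generic sketch under our own assumptions; the `RuntimeError` stands in for whatever your HTTP client raises on a 5xx response:

```python
import time

def call_with_retry(request_fn, max_attempts=5, base_delay=1.0):
    # Retry a callable that raises on transient (5xx-style) failures,
    # doubling the wait between attempts.
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except RuntimeError:  # stand-in for an HTTP 500 from the API
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Simulated API that fails twice (as it might mid-maintenance), then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("500 Internal Server Error")
    return "ok"

print(call_with_retry(flaky, base_delay=0.01))  # ok
```

With a maintenance window of a few minutes, a slightly larger `base_delay` and attempt count would be appropriate in practice.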
2024-10-14 16:30 UTC We are observing a small number of instances where errors occur during job creation, and you may encounter the error message: “Decryption failed: Deciphering failed.” As a result, orchestrations may become stuck in a terminating state. If you experience this issue, please contact our support team.
We are actively investigating the situation and will provide an update later this evening.
2024-10-14 21:40 UTC We have successfully identified the affected orchestrations and deployed a fix that automatically terminates them. We now consider this incident resolved. We sincerely apologize once again for the inconvenience caused.
2024-10-07 18:10 UTC - We are investigating issues on the north-europe.azure.keboola.com stack
2024-10-07 18:30 UTC - The issues are caused by Azure networking problems; we are monitoring the situation.
We are experiencing problems on our AWS EU stack (https://connection.eu-central-1.keboola.com/). We are deeply sorry for the inconvenience this may cause. In the user interface, you may encounter error alerts or slow job processing. Next update in 30 minutes.
Sep 26 08:34 UTC: We identified and fixed an overload on one of our Kubernetes nodes. All systems are now running normally. We’ve implemented measures to prevent recurrence.
Thank you for your patience.
2024-09-23 13:48 UTC - We are currently investigating potential modifications to the primary key for the FTP and S3 extractors that occurred around September 13th, 2024. The issue has already been reverted, and we are conducting an analysis. We will provide more information as soon as we have further details.
UPDATE 2024-09-23 16:36 UTC - Our analysis confirms that no projects on single-tenant stacks were affected by the issue. We are continuing with the analysis of multi-tenant stack projects and will provide more information as soon as we have further details.
UPDATE 2024-09-25 9:30 UTC - Our analysis has been completed, and we now have a list of affected configurations. The issue with potential primary key modifications may have impacted not only FTP and S3 extractors but also other components using the Processor Create Manifest. We would like to highlight that not all configurations with this processor were affected.
In cases where a configuration experienced a primary key modification, the key was automatically restored after the next run. However, a small number of configurations did not revert to their original primary key due to duplicate records in the table.
These cases are limited, and the clients affected by this issue will be contacted individually by our support team today with further steps and recommendations.
If you have any questions or concerns, please reach out to our support team.
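As background on why duplicate records block a primary-key restore: a key can only be re-created when every key tuple in the table is unique. A hypothetical check (the column names and sample data below are illustrative, not taken from any affected configuration) might look like:

```python
from collections import Counter

def duplicate_keys(rows, key_columns):
    # Return key tuples that occur more than once; any such tuple
    # prevents restoring a primary key on key_columns.
    counts = Counter(tuple(row[c] for c in key_columns) for row in rows)
    return [key for key, n in counts.items() if n > 1]

rows = [
    {"id": 1, "date": "2024-09-13", "value": 10},
    {"id": 2, "date": "2024-09-13", "value": 20},
    {"id": 1, "date": "2024-09-13", "value": 10},  # duplicate key tuple
]
print(duplicate_keys(rows, ["id", "date"]))  # [(1, '2024-09-13')]
```

Once the duplicates are removed or deduplicated, the original primary key can be re-applied.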
Sep 20 15:15 UTC: We are experiencing degraded performance on our GCP EU stack (https://connection.europe-west3.gcp.keboola.com/). We are deeply sorry for the inconvenience this may cause and appreciate your patience as we work through it. We will provide further updates as soon as we have more information.
Sep 22 18:53 UTC update: We have gained additional understanding of the performance degradation. The root cause appears to be an intermittent slowdown of query execution on Snowflake. While the execution of a single query is not delayed significantly, the delays accumulate into noticeable slowdowns of minutes for transformations consisting of multiple queries, and even more for entire flows. We are in touch with Snowflake support to uncover all the technical details.
The symptoms of the performance degradation include longer job run times, especially for Snowflake transformations; Data Source and Data Destination jobs are also affected because they load/unload data from a Snowflake database. The degradation occurs randomly and is somewhat time dependent, so not all flows are affected in the same manner. It does not cause any errors.
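To illustrate how small per-query delays compound into flow-level slowdowns (the numbers below are hypothetical, not measured values from this incident):

```python
# Illustrative only: hypothetical per-query delay and query counts.
per_query_delay_s = 2           # small extra latency per Snowflake query
queries_per_transformation = 30
transformations_per_flow = 10

transformation_delay = per_query_delay_s * queries_per_transformation
flow_delay = transformation_delay * transformations_per_flow

print(transformation_delay / 60, "min extra per transformation")  # 1.0
print(flow_delay / 60, "min extra per flow")                      # 10.0
```

A barely noticeable two-second delay per query thus turns into roughly ten extra minutes on a flow of this (assumed) size.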
Sep 24 07:10 UTC update: We have now confirmed the root cause to be slower execution of Snowflake queries at certain times. We have implemented a temporary resource increase to improve the situation. This means that you should see improved job run times. We're still working with Snowflake support on the solution.
Oct 1 7:54 UTC update: We are still working on the resolution together with Snowflake support. The temporary resource increase is still in place. This means that the situation is contained and overall stack performance should be acceptable, but not perfect.
At this moment we don't have a solution ready, nor an ETA for one. Thank you for your understanding and patience. We will provide further updates as soon as we have more information.
2024-09-05 20:30 UTC - We are investigating issues on connection.north-europe.azure.keboola.com
2024-09-05 20:40 UTC - The issue is now resolved; we are monitoring the situation.
We're investigating issues across all Azure stacks. Next update in 15 minutes or when new information is available.
UPDATE 14:35 CEST: The issue seems to be resolved, we're still evaluating the impact. Next update in 30 minutes or when new information is available.
UPDATE 14:55 CEST: The outage was caused by routine maintenance of databases in Azure. All running jobs affected during this outage should restart automatically. Some of the affected Storage jobs may be executed twice, and in very rare cases, when using incremental load without a primary key, this could lead to data duplication.
We're sorry for this inconvenience, we will be taking measures to decrease impact of future maintenance events.
For any additional questions, please contact support.
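To illustrate the duplication risk mentioned above: with a primary key, a repeated load acts as an upsert and is idempotent, so a job running twice is harmless; without a key, the second run simply appends the same rows again. This toy model uses in-memory tables and a hypothetical helper, not the actual Storage implementation:

```python
def incremental_load(table, rows, primary_key=None):
    # Append rows; if a primary_key column is given, upsert by key instead.
    if primary_key is None:
        table.extend(rows)  # no key: repeated loads duplicate rows
    else:
        existing = {r[primary_key]: r for r in table}
        for row in rows:
            existing[row[primary_key]] = row
        table[:] = list(existing.values())
    return table

batch = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]

keyed = []
incremental_load(keyed, batch, primary_key="id")
incremental_load(keyed, batch, primary_key="id")  # re-run: idempotent
print(len(keyed))    # 2

unkeyed = []
incremental_load(unkeyed, batch)
incremental_load(unkeyed, batch)                  # re-run: duplicates
print(len(unkeyed))  # 4
```

This is why the warning applies specifically to incremental loads configured without a primary key.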
Between 12:05 and 13:45 CEST, Snowflake transformations could return an empty string for TIMESTAMP columns during query execution or output mapping. The bug is now fixed.
This affected only Keboola-provisioned Snowflake; BYODB databases were not affected.
We're sorry for this inconvenience, if you have any questions please contact our support.
2024-08-27 06:25 - We are investigating an issue with the restoration of Python workspaces
2024-08-27 07:50 - The issue is resolved; Python workspaces are now working normally.
We apologize for the inconvenience.
A major difference is that the new endpoint returns results from a secondary database, which is synchronized with a slight delay.
This means a new job may not show up immediately after it is created, and job details may contain slightly outdated data if the job was updated recently. The delay should be a couple of seconds at most during normal operation.
Another difference is that the new endpoint returns at most 500 items per page.
See the Job Queue API documentation for more details.
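Because of the replica lag described above, a client that creates a job and immediately queries the list may not see it yet. A common pattern is to poll briefly until the job appears. The sketch below uses a hypothetical `fetch_job` callable rather than the real API client:

```python
import time

PAGE_LIMIT = 500  # the new endpoint returns at most 500 items per page

def wait_for_job(fetch_job, job_id, timeout=10.0, interval=0.5):
    # Poll until a freshly created job becomes visible on the read replica.
    # fetch_job(job_id) returns the job dict, or None if not yet replicated.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job(job_id)
        if job is not None:
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} not visible after {timeout}s")

# Simulated replica that lags behind by a couple of polls.
state = {"polls": 0}
def fetch_job(job_id):
    state["polls"] += 1
    return {"id": job_id, "status": "created"} if state["polls"] >= 3 else None

print(wait_for_job(fetch_job, "123", interval=0.01)["id"])  # 123
```

Given the expected lag of a couple of seconds, a short timeout and sub-second polling interval should suffice; listings larger than `PAGE_LIMIT` must be paged through.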
2024-08-19 08:05 - We are investigating an issue with synchronous actions on the connection.north-europe.azure.keboola.com stack
2024-08-19 08:30 - We are still investigating the issue; the problem is on all Azure instances.
2024-08-19 09:00 - The issue is resolved; synchronous actions are operational on all Azure stacks.
2024-08-13 11:02 UTC - We are experiencing a problem with the project consumption dashboard. Displaying the detail page ends with an error and the dashboard is not available.
2024-08-13 11:32 UTC - Unfortunately, the problem still persists, and we have not been able to find the root cause. We are still actively solving the problem.
2024-08-13 11:51 UTC - The problem has been fixed and the consumption dashboard is available again.
We apologize for the inconvenience.
2024-08-13 09:06 UTC - We are investigating an issue where jobs are delayed. So far, the problem has only affected the AWS EU stack https://connection.eu-central-1.keboola.com.
2024-08-13 09:35 UTC - The problem still persists, we have not yet been able to determine the cause of the problem.
2024-08-13 09:48 UTC - We have discovered and fixed the problem. Scheduled jobs are currently delayed; we are now stabilizing the stack and will give the next update when it is stable.
2024-08-13 10:00 UTC - The problem is solved completely and the stack is fully stable. We are sorry for the inconvenience.