We apologize for any inconvenience.
We are aware of a bug in the UI that prevents tables in the input mapping list from being visible. We are in the process of reverting the release that caused it and will prepare a proper fix afterward.
UPDATE 18:15 CET: The release that introduced the bug has been reverted, so the issue no longer occurs. You may need to reload the browser. The bug was introduced today at 16:04 CET.
2022-07-02 7:45 UTC We are investigating delayed orchestrations on the Azure North Europe Keboola Connection stack (https://connection.north-europe.azure.keboola.com). Next update in 30 minutes.
2022-07-02 8:40 UTC Our investigation shows the problem started at 6:40 UTC. Orchestrations that should have been scheduled after this time were delayed by minutes to tens of minutes. The situation returned to normal at 7:45 UTC and all orchestrations are being scheduled properly.
We are sorry for the inconvenience.
2022-06-23 22:30 UTC: We're investigating cases of jobs not processing in the US stack (https://connection.keboola.com/). Next update in 30 minutes.
2022-06-23 23:00 UTC: The situation is now under control and our services are running normally. Job processing will gradually speed up.
We are sorry for the inconvenience.
2022-06-23 16:15 CET We detected a problem that made some use cases in the Flow Builder tool at https://connection.north-europe.azure.keboola.com non-functional between 13:00 CET and 15:30 CET.
The situation is now under control and our service is running normally.
There was an increased error rate for components using keboola.processor-create-manifest in their configurations. The error affected only component configurations that used an empty enclosure in CSV Settings.
Affected components may include data sources such as AWS S3, FTP, etc.
We are sorry for any inconvenience. Please feel free to restart your jobs manually.
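For illustration only, here is a minimal Python sketch of how you might spot a configuration that could have been affected. The configuration shape and key names used below ("processors", "after", "definition", "parameters", "enclosure") are assumptions for the example, not the authoritative Keboola configuration schema.

def uses_empty_enclosure(configuration: dict) -> bool:
    # Return True if any keboola.processor-create-manifest processor in the
    # (assumed) configuration shape has an empty 'enclosure' in its parameters.
    processors = configuration.get("processors", {}).get("after", [])
    for processor in processors:
        component = processor.get("definition", {}).get("component", "")
        if component == "keboola.processor-create-manifest":
            if processor.get("parameters", {}).get("enclosure", '"') == "":
                return True
    return False

# Example of a configuration fragment that would match the affected pattern:
affected_example = {
    "processors": {
        "after": [
            {
                "definition": {"component": "keboola.processor-create-manifest"},
                "parameters": {"delimiter": ",", "enclosure": ""},  # empty enclosure
            }
        ]
    }
}

assert uses_empty_enclosure(affected_example)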
2022-06-20 16:44 UTC - We are investigating an increased error rate for some components using keboola.processor-create-manifest.
2022-06-21 07:22 UTC - We have identified the root cause and are continuing to work on a fix.
2022-06-21 12:30 UTC - The incident is resolved; the last occurrence of the error was at 12:13 UTC.
2022-06-20 06:40 UTC - We are investigating a higher rate of jobs ending with the internal error Job "XYZ" terminated unexpectedly on the connection.eu-central-1.keboola.com stack since June 18th. At the moment the situation is stabilised and we no longer see internal errors; we are investigating the root cause. Next update when new information is available.
2022-06-20 14:36 UTC - The incident is resolved; the last occurrence of the error was at 05:35 UTC. We have found the jobs causing the errors and notified the project owners.
2022-06-16 13:09 UTC - We are investigating high job queue error rates on Azure stacks. Next update when new information is available or in 30 minutes.
2022-06-16 13:28 UTC - We have found the root cause and implemented a fix. The API no longer returns the errors and the platform should stabilize. We continue to monitor the situation. Next update when new information is available or in 30 minutes.
2022-06-16 13:51 UTC - The incident has been resolved; since 13:28 UTC we have not seen an increased API error rate and the platform is stable.
2022-06-10 03:40 UTC Since 2:50 UTC today we have been experiencing increased Connection API internal error rates as well as increased latency due to an incident in the AWS us-east-1 region. Job processing shouldn't be affected; however, there may be a slowdown in UI responsiveness.
2022-06-10 04:20 UTC - AWS has not yet resolved the incident. Furthermore, we are noticing that some job processing might be affected, with jobs ending in internal errors due to failed connection retries to the AWS API.
2022-06-10 08:15 UTC - The issue has been resolved; since 04:30 UTC we have not seen an increased API error rate.
2022-06-06 21:00 UTC We are investigating a problem with jobs not being processed in the AWS US stack https://connection.keboola.com/. Some jobs will probably end with an internal error.
2022-06-06 21:25 UTC Correction: jobs are only delayed and all jobs end successfully.
2022-06-06 21:40 UTC Unfortunately, the problem still persists; we are trying to find the cause.
2022-06-06 22:30 UTC We will update this post when we have more information.
2022-06-06 23:25 UTC We have managed to improve the situation. We will continue monitoring it and will provide more information about the problem tomorrow.
2022-06-07 12:40 UTC The situation is now under control and our services are running normally.
We are sorry for the inconvenience.