A bug in both the AdWords and Sklik extractors prevented data from being delivered to input tables since Feb 17 13:00 UTC. The data was actually saved to the wrong bucket: in.c-ex-adwords instead of in.c-ex-adwords-[config] (similarly for Sklik). If you have only one configuration of the extractor, you can retrieve the data there. Otherwise, data from multiple configurations is mixed in that single bucket, so it is safer to run the extractions again. The problem is now fixed and should not occur again. We are sorry for any inconvenience.
AWS has reported a connectivity issue in the US-EAST-1 region, where the majority of our infrastructure is located. This may result in 500, 503 and 504 application errors within the infrastructure (our components) or when reaching out to other APIs (extractors).
We're sorry for any inconvenience. We'll keep this post updated with the current status. You can also check the status at http://status.aws.amazon.com/, in the row Amazon Elastic Compute Cloud (N. Virginia).
9:23 AM PST We are investigating possible Internet connectivity issues in the US-EAST-1 Region.
10:09 AM PST We are continuing to investigate Internet connectivity issues in the US-EAST-1 Region.
There are a lot of scheduled jobs waiting in the orchestration queue; we are adding more workers to resolve this issue.
UPDATE 11:00 CET: The issue is resolved. All orchestrations should now start immediately. Sorry for the inconvenience.
There was a bug in imports from transformations to Storage API. It occurred only when all of the following conditions were met:
- The transformation was a Redshift transformation with output to a Redshift table
- The output was incremental
- A previously NULL value was changed to a non-NULL value
In this case, NULL values were never updated in the Storage API table. The bug is fixed now, but the affected output tables have to be recreated. The bug had been present in Storage since the roll-out of Redshift support.
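The symptom is consistent with SQL's three-valued comparison logic: `old_value <> new_value` evaluates to NULL (not TRUE) when `old_value` is NULL, so an incremental change-detection step built on that comparison silently skips exactly those rows. A minimal Python sketch of this hypothetical mechanism (an illustration only, not Keboola's actual code):

```python
def sql_neq(a, b):
    """Mimic SQL's three-valued '<>': any comparison involving NULL
    (represented here as None) yields NULL, which is not truthy."""
    if a is None or b is None:
        return None
    return a != b


def incremental_merge(existing, incoming, buggy=True):
    """Merge incoming rows (dict: primary key -> value) into existing rows.

    With buggy=True, a row is updated only when the SQL-style comparison
    old <> new is TRUE -- so NULL old values are never replaced,
    reproducing the symptom described above."""
    merged = dict(existing)
    for key, new_val in incoming.items():
        if key not in merged:
            merged[key] = new_val          # brand-new row: always inserted
        elif buggy:
            if sql_neq(merged[key], new_val):  # NULL old value -> skipped
                merged[key] = new_val
        else:
            # correct behaviour: NULL is treated as different from any value
            if merged[key] != new_val:
                merged[key] = new_val
    return merged
```

With `existing = {1: None, 2: "a"}` and `incoming = {1: "x", 2: "b"}`, the buggy merge leaves row 1 as NULL while the corrected one updates it to "x".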
We had issues in one of our components (the job handling queue) between 10:00 PM and 10:15 PM PST on Sunday night (03:00–03:15 UTC Monday). This resulted in some failed orchestrations (those scheduled at that time or starting their tasks at that time). We're planning to upgrade the component in the coming weeks, but if the error occurs again, we'll upgrade it immediately. The upgrade will be accompanied by a short maintenance downtime.
We're sorry for any inconvenience.
Storage API performance was degraded between 11:30 and 12:30 CET, which also affected the response time of the Keboola Connection user interface.
The issue is resolved now.
Several projects may have experienced errors in extractor job processing. Database, Zendesk, Google Drive and some other extractors were affected. The issue is resolved now and we are investigating its cause.
We have restarted failed or waiting jobs. We're sorry for any inconvenience!
The Paymo, Facebook, Facebook Ads and Salesforce extractors were returning a curl(60) error in orchestrations from 3 PM to 11 PM PST on November 6. The error was caused by invalid SSL certificates.
To finish your tasks, just re-run your orchestrations. We're sorry for any inconvenience!
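curl error 60 means the peer's SSL certificate could not be verified against a trusted CA. As a quick diagnostic (a sketch only, not part of any extractor; `check_certificate` is a hypothetical helper), the same verification can be reproduced with Python's standard `ssl` module:

```python
import socket
import ssl


def check_certificate(host, port=443, timeout=10):
    """Attempt a TLS handshake with full certificate verification --
    the same check that makes curl fail with error 60 when a server's
    certificate chain is invalid."""
    ctx = ssl.create_default_context()  # verifies chain and hostname
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # notAfter is the certificate's expiry timestamp
                return True, tls.getpeercert().get("notAfter")
    except ssl.SSLError as exc:
        return False, f"certificate problem: {exc}"
    except OSError as exc:
        return False, f"connection problem: {exc}"
```

A `(False, "certificate problem: ...")` result for a host that otherwise responds usually points to an expired certificate or a misconfigured chain (e.g. a missing intermediate) on the server side.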
Some files were not accessible between 7 PM and 10 PM PST on November 4. This caused failures of loads into Storage API tables and thus also orchestration failures.
Example of failed orchestration:
The failures were caused by a failed Elasticsearch cluster node. We are still investigating the root cause, but the whole infrastructure is currently working smoothly. To finish your tasks, just re-run your orchestrations. We're sorry for any inconvenience!
Synchronous imports into Storage API were broken from 7:45 PM to 10:00 PM PST on July 13. Scheduled orchestrations were affected by this issue.
We apologize for any inconvenience.