AdWords and Sklik extractor issues

There was a bug in both the AdWords and Sklik extractors that prevented data from being delivered to input tables since Feb 17, 13:00 UTC. The data was in fact saved to the wrong bucket: in.c-ex-adwords instead of in.c-ex-adwords-[config] (and similarly for Sklik). If you have only one configuration of the extractor, you can retrieve the data from that bucket. Otherwise, data from multiple configurations is mixed together in that single bucket, so it is safer to run the extractions again. The problem is now fixed and should not occur again. We are sorry for any inconvenience.
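If you are not sure what ended up in the shared bucket, one way to check is to list its tables through the Storage API. The snippet below is a minimal sketch, not an official tool; the bucket id comes from this post, while the token value is a placeholder and the exact endpoint usage is an assumption based on the public Storage API documentation.

    import requests

    # Assumption: standard Keboola Storage API base URL and token header.
    STORAGE_API = "https://connection.keboola.com/v2/storage"
    TOKEN = "your-storage-api-token"  # placeholder

    # List tables that landed in the shared (wrong) bucket mentioned above.
    resp = requests.get(
        f"{STORAGE_API}/buckets/in.c-ex-adwords/tables",
        headers={"X-StorageApi-Token": TOKEN},
    )
    resp.raise_for_status()

    for table in resp.json():
        # If you run more than one AdWords configuration, rows from all of
        # them may be mixed here; in that case re-run the extractions instead.
        print(table["id"], table.get("rowsCount"))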

AWS Connectivity Issues

AWS has issued information about a connectivity issue in the US-EAST-1 region, where the majority of our infrastructure is located. This may result in 500, 503 and 504 application errors within the infrastructure (our components) or when reaching out to other APIs (extractors).

We're sorry for any inconvenience. We'll keep this post updated with the current status. You can also check the current status at http://status.aws.amazon.com/, in the Amazon Elastic Compute Cloud (N. Virginia) row.
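If your own scripts call our APIs during the outage, the usual mitigation for transient 500/503/504 responses is to retry with exponential backoff. Below is a minimal sketch using the retry helpers from requests/urllib3; the URL is a placeholder, not a specific affected endpoint.

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    # Retry transient 5xx responses with exponential backoff (0.5s, 1s, 2s, ...).
    retry = Retry(
        total=5,
        backoff_factor=0.5,
        status_forcelist=[500, 503, 504],
    )

    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))

    # Placeholder URL; substitute the API your job actually calls.
    response = session.get("https://example.com/api/data")
    print(response.status_code)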

---

9:23 AM PST We are investigating possible Internet connectivity issues in the US-EAST-1 Region.

10:09 AM PST We are continuing to investigate Internet connectivity issues in the US-EAST-1 Region.

11:07 AM PST We are continuing to investigate Internet connectivity issues in the US-EAST-1 Region. This is impacting connectivity between some customer networks and the region. Connectivity within the US-EAST-1 Region is not impacted.

12:23 PM PST We continue to make progress in resolving an issue with an Internet provider outside of our network in the US-EAST-1 Region. Internet connectivity between some customer networks and the region may have been impacted by this issue. We have taken action to address the impact and are seeing recovery for many of the affected instances. Connectivity within the US-EAST-1 Region remains unaffected.

1:44 PM PST We continue to make progress in resolving the Internet connectivity issue between customer networks and affected instances. Connectivity within the US-EAST-1 Region remains unaffected.

2:21 PM PST We experienced an issue with an Internet provider outside of our network that impacted connectivity between some customer networks and the US-EAST-1 Region. Connectivity to instances and services within the region was not affected by the event. The issue has been mitigated, and impacted customers should no longer have problems connecting to instances in the US-EAST-1 Region.

Transformations Redshift output mapping bug

There was a bug in imports from transformations to Storage API. It occurred only when all of the following conditions were met:

  • Redshift transformation with output to a Redshift table
  • The output was incremental
  • A previously null value was changed to a non-null value

In that case, null values were never updated in the Storage API table. The bug is now fixed, but the affected output tables have to be recreated. The bug had been present in Storage since the rollout of Redshift support.
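One way to recreate an affected output table is to drop it in Storage and let the next full (non-incremental) run of the transformation create it again with the correct values. A minimal sketch, assuming the standard Storage API table-drop endpoint; the table id and token are placeholders.

    import requests

    STORAGE_API = "https://connection.keboola.com/v2/storage"
    TOKEN = "your-storage-api-token"  # placeholder

    # Placeholder id of an affected output table.
    table_id = "out.c-main.affected-table"

    # Drop the table; the next non-incremental load will recreate it
    # and write the correct non-null values.
    resp = requests.delete(
        f"{STORAGE_API}/tables/{table_id}",
        headers={"X-StorageApi-Token": TOKEN},
    )
    resp.raise_for_status()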

Sunday night queue component issues

We had issues in one of our components (the job handling queue) between 10 pm and 10:15 pm PST on Sunday night (03:00–03:15 UTC Monday). This resulted in some failed orchestrations (those scheduled at that time or starting their tasks at that time). We're planning to upgrade the component in the coming weeks, but if the error occurs again, we'll upgrade it immediately. The upgrade will be accompanied by a short maintenance downtime.

We're sorry for any inconvenience.

Jobs failures

Several projects may have experienced errors in extractor job processing. The Database, Zendesk, Google Drive and some other extractors were affected. The issue is now resolved and we are investigating its cause.

We have restarted failed or waiting jobs. We're sorry for any inconvenience! 

Extractors failures

The Paymo, Facebook, Facebook Ads and Salesforce extractors were returning a curl (60) error in orchestrations between 3 PM and 11 PM PST on November 6th. The error was caused by invalid SSL certificates.

To finish your tasks, just re-run your orchestrations. We're sorry for any inconvenience!
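If you want to verify that a provider's certificate is valid again before re-running, a quick check with Python's standard library is enough. The host below is a placeholder; a failed handshake raises an ssl.SSLError, which is the same class of problem curl reports as error 60.

    import socket
    import ssl

    # Placeholder host; substitute the API endpoint the extractor talks to.
    host = "graph.facebook.com"
    context = ssl.create_default_context()

    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print("certificate OK, expires:", cert["notAfter"])
    except ssl.SSLError as err:
        # Certificate validation failed (curl error 60 equivalent).
        print("certificate problem:", err)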

Inaccessible Storage API files

Some files were not accessible between 7 PM and 10 PM PST on November 4. This caused failures of loads to Storage API tables and thus also orchestration failures.

Example of a failed orchestration: [screenshot]

The outage was caused by a failed Elasticsearch cluster node. We are still investigating the root cause. However, our whole infrastructure is running smoothly at this time. To finish your tasks, just re-run your orchestrations. We're sorry for any inconvenience!