End of support for long MySQL queries on June 1st

After June 1st 2015, we'll be enforcing strict limits on MySQL queries: any query running longer than 30 minutes (1800 seconds) will be terminated and the transformation will fail. You can already see the duration of all queries longer than 2 minutes (120 seconds) in the event log of any Transformation job, so you can take optimization steps in advance.

We're introducing this limit to catch errors such as forgotten indexes early and to balance the load on the shared MySQL Transformation database.

If your queries take significantly longer than 30 minutes, please consider migrating your project to Redshift.
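As an illustration, the two thresholds above can be checked with a short sketch (the function and the event wording are ours, for illustration only; they are not part of the Keboola API):

```python
# Query-duration thresholds from the announcement, in seconds.
WARN_THRESHOLD = 120    # queries longer than this appear in the job's event log
KILL_THRESHOLD = 1800   # queries longer than this are terminated after June 1st

def classify_query(duration_seconds):
    """Classify a query duration against the announced limits.

    Returns 'ok', 'warn' (logged in the Transformation job's event log),
    or 'kill' (terminated; the transformation fails).
    """
    if duration_seconds > KILL_THRESHOLD:
        return "kill"
    if duration_seconds > WARN_THRESHOLD:
        return "warn"
    return "ok"

# Example: a 25-minute query is logged as a warning but allowed to finish.
print(classify_query(25 * 60))  # prints: warn
```

A query at exactly 30 minutes is still allowed to finish; only queries exceeding the limit are terminated.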


AWS Connectivity Issues

We're experiencing connectivity issues from AWS to some parts of the outside world (a.k.a. the Internet). As our transformation sandbox server (ovh-tapi.keboola.com) is not hosted in AWS, you may experience failures when creating sandboxes and credentials. If AWS does not resolve this shortly, we'll move the sandbox server into the AWS network.

We will keep this post updated with the current status.

UPDATE (2:00pm PST): Connectivity seems to work fine now.

Facebook Extractor: invalid account or token

We're changing the way the Facebook Extractor reacts to invalid accounts and tokens. Previously, all invalid accounts/tokens were automatically disabled (with an error event in Storage) and the extractor continued with the extraction.

As this event was easy to miss and there was no other notification about an invalid token or account, we have switched to stricter behavior: any invalid account/token now stops the execution of the whole job/orchestration with an explanatory message:

You then need to change the token or disable the account manually.

For any questions and comments do not hesitate to contact us at support@keboola.com.

Provisioning improvements: MySQL DB names and logging

We have changed the naming conventions for MySQL provisioning. Instead of tapi_3800_sand and tapi_3800_tran (where 3800 is the token ID), the new database names are sand_232_3800 and tran_3800, where 232 is the project ID. This makes it easier to distinguish between projects in your sandbox userspace. Existing credentials keep their database names.
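The new convention can be sketched as follows (the helper functions are ours, for illustration; the IDs are the ones from the example above):

```python
def sandbox_db_name(project_id, token_id):
    # New convention: sand_<project>_<token> (was tapi_<token>_sand).
    return f"sand_{project_id}_{token_id}"

def transformation_db_name(token_id):
    # New convention: tran_<token> (was tapi_<token>_tran).
    return f"tran_{token_id}"

print(sandbox_db_name(232, 3800))    # prints: sand_232_3800
print(transformation_db_name(3800))  # prints: tran_3800
```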

We also added a little more information to events:

AWS Connectivity Issues

AWS recently issued information about a connectivity issue in the US-EAST-1 Region, where the majority of our infrastructure is located. This may result in 500, 503 and 504 application errors within our infrastructure (our components) or when reaching out to other APIs (extractors).

We're sorry for any inconvenience. We'll keep this post updated. You can also check the current status at http://status.aws.amazon.com/, row Amazon Elastic Compute Cloud (N. Virginia).

---

9:23 AM PST We are investigating possible Internet connectivity issues in the US-EAST-1 Region.

10:09 AM PST We are continuing to investigate Internet connectivity issues in the US-EAST-1 Region.

11:07 AM PST We are continuing to investigate Internet connectivity issues in the US-EAST-1 Region. This is impacting connectivity between some customer networks and the region. Connectivity within the US-EAST-1 Region is not impacted.

12:23 PM PST We continue to make progress in resolving an issue with an Internet provider outside of our network in the US-EAST-1 Region. Internet connectivity between some customer networks and the region may have been impacted by this issue. We have taken action to address the impact and are seeing recovery for many of the affected instances. Connectivity within the US-EAST-1 Region remains unaffected.

1:44 PM PST We continue to make progress in resolving the Internet connectivity issue between customer networks and affected instances. Connectivity within the US-EAST-1 Region remains unaffected.

2:21 PM PST We experienced an issue with an Internet provider outside of our network that impacted connectivity between some customer networks and the US-EAST-1 Region. Connectivity to instances and services within the region was not affected by the event. The issue has been mitigated, and impacted customers should no longer have problems connecting to instances in the US-EAST-1 Region.

Sunday night queue component issues

We had an issue with one of our components (the job-handling queue) between 10:00pm and 10:15pm PST on Sunday night (03:00–03:15 UTC Monday). This resulted in some failed orchestrations (those scheduled at that time or starting their tasks at that time). We're planning to upgrade the component in the coming weeks, but if the error occurs again, we'll upgrade it immediately. The upgrade will be accompanied by a short maintenance downtime.


We're sorry for any inconvenience.