Some orchestrations failed around 09:46 AM CET due to SSL issues in the Storage API.
The problem is now fixed and the failed orchestrations have been restarted. Sorry for the inconvenience.
We have changed the naming conventions for MySQL provisioning. Instead of tapi_3800_sand and tapi_3800_tran (where 3800 is the token ID), the new database names are sand_232_3800 and tran_3800 (where 232 is the project ID), making it easier to distinguish between projects in your sandbox userspace. Existing credentials keep their database names.
We also added a little more information to events:
For all columns without a defined datatype in a Redshift transformation, we now apply default LZO compression. This leads to marginally slower transformation times but significantly lower memory and disk space usage.
You can override this behavior by defining a datatype without a compression.
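As an illustrative sketch (the table and column names are made up, not taken from any real configuration), this is what the two cases look like in Redshift DDL:

```sql
-- Hypothetical example: the difference between LZO and raw encoding in Redshift.
CREATE TABLE example (
    note VARCHAR(2000) ENCODE lzo,  -- compressed, as applied by default to columns without a defined datatype
    id   BIGINT ENCODE raw          -- uncompressed, the effect of defining a datatype without a compression
);
```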
A few weeks ago, we silently launched the ability to create Storage API aliases using your own SQL code. These alias tables with custom SQL can be created on the Redshift backend only.
Create New Alias Table:
Define your own SQL code:
Why?
Alias tables can help you structure your data. Think of them as a "Transform on Demand": everything happens on the fly (i.e. in real time). Say we have business transactions in a table named "data". Here is an example of how to define a "derived" table with the weekly sum of all transactions that can't be joined with our Customer table (alarm, wrong data!! :-)
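A minimal sketch of such an alias table's SQL, assuming hypothetical column names (transaction_date, amount) that are not taken from the actual example, producing a year/week/total result like the one below:

```sql
-- Illustrative alias table SQL (Redshift): weekly sums of transactions.
-- Column names transaction_date and amount are assumptions for this sketch.
SELECT
    DATE_PART(year, transaction_date) AS year,
    DATE_PART(week, transaction_date) AS week,
    SUM(amount) AS total
FROM "data"
GROUP BY 1, 2
ORDER BY 1, 2;
```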
Raw Result of this simple alias table:
"year","week","total"
"2014","1","1314788.27"
"2014","2","3719694.16"
"2014","3","3907852.92"
"2014","4","4013945.26"
"2014","5","3884234.84"
From now on, tables can be aliased between the in and out stages in both directions; aliasing is no longer limited to the in -> out direction.
Wait... what is R?
We've developed the backend and UI for transformations in R. All R transformations run in our public Docker image and have a 1 GB memory allowance (to be increased in the near future).
There's a guide on how to develop and test your R scripts locally, and a list of best practices and limitations.
Start typing your R scripts now!
There was a bug in both the AdWords and Sklik extractors that prevented data from being delivered to input tables since Feb 17 13:00 UTC. The data was actually saved to the wrong bucket: in.c-ex-adwords instead of in.c-ex-adwords-[config] (similarly for Sklik). If you have only one configuration for the extractor, you can retrieve the data from that bucket. Otherwise, data from multiple configurations is mixed together in that one bucket, so it is safer to run the extractions again. The problem is now fixed and should not occur again. We are sorry for any inconvenience.
AWS recently issued information about a connectivity issue in the US-EAST-1 Region, where the majority of our infrastructure is located. This may result in 500, 503 and 504 application errors within the infrastructure (our components) or when reaching out to other APIs (extractors).
We're sorry for any inconvenience. We'll keep this post updated. You can also check the current status at http://status.aws.amazon.com/ under the row Amazon Elastic Compute Cloud (N. Virginia).
---
9:23 AM PST We are investigating possible Internet connectivity issues in the US-EAST-1 Region.
10:09 AM PST We are continuing to investigate Internet connectivity issues in the US-EAST-1 Region.
The Storage API Console now processes all non-Redshift table exports asynchronously. Synchronous table export is deprecated, except for tables on the Redshift backend.