You can define the Distribution Style for Redshift input mapping.
Here are some hints on how to design tables: http://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-best-dist-key.html
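As a rough illustration, an input-mapping entry choosing a distribution style might look like the sketch below. The key names (`distStyle`, `distKey`) and table names are assumptions for illustration; Redshift itself supports the three styles EVEN, KEY and ALL, as described on the linked AWS page.

```python
# Hypothetical input-mapping fragment; the exact configuration key names
# may differ from what Keboola actually uses.
input_mapping = {
    "source": "in.c-main.orders",   # made-up source table
    "destination": "orders",
    "distStyle": "KEY",             # assumed key name; one of EVEN, KEY, ALL
    "distKey": "customer_id",       # assumed key name; column used for KEY distribution
}

# Redshift only accepts these three distribution styles
assert input_mapping["distStyle"] in {"EVEN", "KEY", "ALL"}
```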
All Redshift transformations can now be easily integrated into SQLdep. A single click uploads your current transformation (and all dependencies) to SQLdep and shows you a big picture of what’s happening in the transformation. You can look at it from the table, column or query perspective.
Note: this is only available for Redshift transformations. For more information about our Redshift backend, please contact support@keboola.com.
We have deployed improvements to Mandatory User Filters which require a change in the writer’s configuration. Your configurations will be migrated automatically, but here is a list of the changes:
The filters table contained the columns name, attribute, element, operator, uri and now contains only name, attribute, operator, value (element has been renamed to value and uri has moved to the filters_projects table)
The filters_projects table contained the columns filterName, pid and now contains uri, filter, pid
The filters_users table contained the columns filterName, userEmail and now contains id, filter, email (id is generated by the Writer and is unique for each combination of filter name and email)
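The migration of a single filter row can be sketched as follows. The column names come from the list above; the sample values and the pid are made up for illustration.

```python
# One row in the old filters table (sample values are invented)
old_filter = {
    "name": "region-filter",
    "attribute": "attr.users.region",
    "element": "US",                      # renamed to "value" after migration
    "operator": "=",
    "uri": "/gdc/md/project1/obj/42",     # moves to the filters_projects table
}

# After migration: element -> value, uri dropped from this table
new_filter = {
    "name": old_filter["name"],
    "attribute": old_filter["attribute"],
    "operator": old_filter["operator"],
    "value": old_filter["element"],
}

# The uri now lives in filters_projects, keyed by filter name and pid
new_filter_project = {
    "uri": old_filter["uri"],
    "filter": old_filter["name"],
    "pid": "project1",                    # made-up project id
}
```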
The RestBox tool has been updated with two new plugins you can find on the File modifications tab.
Filename adds a column to your StorageAPI table containing the source filename/URL
DownloadTime adds a column with the date/time when the file was processed by RestBox.
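The effect of the two plugins on a single row can be sketched like this. The column names (`filename`, `download_time`) and values are assumptions; the actual names depend on your plugin configuration on the File modifications tab.

```python
# A row extracted by RestBox before the plugins run (sample data)
row = {"id": 1, "amount": 99}

# Filename plugin: appends the source filename/URL (assumed column name)
row["filename"] = "http://example.com/export.csv"

# DownloadTime plugin: appends the processing date/time (assumed column name)
row["download_time"] = "2014-05-01 12:30:00"
```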
Upload of a table (API call /upload-table) now produces two separate jobs in the queue, one for the model update and one for the data load. You can also call these jobs separately via the API calls /update-ldm and /load-data, see the documentation
Update: the API call /load-data now also accepts a list of multiple tables to upload
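As a rough sketch, the two separate calls might be assembled like this. Only the endpoint paths (/update-ldm, /load-data) come from the text; the base URL, payload field names and table ids are assumptions, so check the documentation for the real request format.

```python
import json

# Assumed base URL for the GoodData Writer API (illustration only)
BASE = "https://syrup.keboola.com/gooddata-writer"

# Job 1: update the logical data model for one table (assumed payload fields)
update_ldm = {
    "method": "POST",
    "url": BASE + "/update-ldm",
    "body": json.dumps({"tableId": "out.c-main.orders"}),
}

# Job 2: load data; /load-data also accepts a list of multiple tables
load_data = {
    "method": "POST",
    "url": BASE + "/load-data",
    "body": json.dumps({"tables": ["out.c-main.orders", "out.c-main.customers"]}),
}
```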
You can connect GoodData Writer to an existing GoodData project.
GoodData Writer is now able to optimize SLI hashes.
On the new sandbox credentials page you can now share your sandbox database with another user in the same project. The user you’re sharing your sandbox with needs to have active sandbox credentials; your sandbox database will then show up in the database list in their MySQL client.
Reset Project - you can discard the existing GoodData project and automatically recreate a new one. The existing Writer configuration will be reused.
Turn Maintenance On - when turned on, all communication with the GoodData API is paused.
A new Elasticsearch Writer and Extractor are now live, enabling you to query your data in real time using the full power of Elasticsearch.
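A minimal query-DSL sketch is shown below. The `match` query and `size` option are standard Elasticsearch query DSL; the index and field names are made up. The same body could be POSTed to `/<index>/_search` on your Elasticsearch host.

```python
# Standard Elasticsearch query DSL; "customer_name" and the search term
# are invented examples.
query = {
    "query": {"match": {"customer_name": "smith"}},  # full-text match query
    "size": 10,                                      # return at most 10 hits
}
```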