Storage API Console now processes all non-Redshift table exports asynchronously. Synchronous table export is deprecated except for Redshift backend tables.
From now on you can use all OUT tables in all transformations. The previous restrictions on using OUT tables in Redshift transformations have been removed.
Storage now supports snapshotting and creating tables from snapshots for the Redshift backend. Snapshots are compatible between backends, so you can use the snapshotting feature to migrate tables between backends.
Rollback is not supported for Redshift tables at the moment.
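Below is a minimal sketch of how such a migration could be scripted against the Storage API. The endpoint paths, payload fields and response keys are assumptions based on the API docs, not something stated in this post, and the table and bucket names are placeholders.

```python
# A minimal sketch, assuming the Storage API snapshot endpoints; endpoint
# paths, payload fields and response keys are assumptions, not verbatim
# from this post.
import requests

TOKEN = "your-storage-api-token"                     # placeholder token
BASE = "https://connection.keboola.com/v2/storage"
HEADERS = {"X-StorageApi-Token": TOKEN}

# 1) Create a snapshot of an existing table
resp = requests.post(
    f"{BASE}/tables/in.c-main.orders/snapshots",     # placeholder table ID
    headers=HEADERS,
    data={"description": "pre-migration snapshot"},
)
snapshot_id = resp.json()["id"]                      # assumed response field

# 2) Create a table from that snapshot in a bucket on another backend
resp = requests.post(
    f"{BASE}/buckets/in.c-migrated/tables-async",    # placeholder bucket ID
    headers=HEADERS,
    data={"snapshotId": snapshot_id, "name": "orders"},
)
print(resp.json())  # asynchronous job; poll it until the table is created
```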
The tableId parameter of the data loading API calls /gooddata-writer/load-data and /gooddata-writer/load-data-multi is now optional. If the parameter is not present, the data load will be performed for all active tables (i.e. tables with the flag export=1). See the Apiary docs.
Support in the Orchestrator UI is being prepared. When it is ready, you will be able to replace the /gooddata-writer/upload-project call with one of these calls to speed up the loads and avoid unnecessary model updates.
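For illustration, here is a minimal sketch of calling load-data without tableId so that all active tables are loaded. Only the endpoint name and the optional tableId come from this post; the base URL, the writerId parameter and the job-polling details are assumptions.

```python
# A minimal sketch of a GoodData Writer load-data call without tableId,
# so all active tables (export=1) are loaded. Base URL and "writerId"
# are assumptions; tableId being optional is the feature described above.
import requests

TOKEN = "your-storage-api-token"                              # placeholder
URL = "https://syrup.keboola.com/gooddata-writer/load-data"   # assumed URL
HEADERS = {"X-StorageApi-Token": TOKEN}

payload = {
    "writerId": "main",  # assumption: identifies the writer configuration
    # "tableId": "out.c-main.orders",  # now optional - omit to load all active tables
}

job = requests.post(URL, headers=HEADERS, json=payload).json()
print(job)  # the call is asynchronous; poll the returned job for its status
```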
Google Drive and Google Analytics extractors now allow you to run extraction of a single sheet/query added to the configuration.
Run extraction of a single query within a database extractor configuration.
Attach your data (a CSV or gzipped CSV file) and send it to a given email address; the pigeon will check the inbox and import the received attachment into a Storage API table. The whole workflow can be configured via the Pigeon Importer UI app and then registered as a regular orchestration task.
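If you want to send the attachment from a script rather than a mail client, a minimal sketch could look like the following; the recipient address and SMTP server are placeholders, so use the import address shown in the Pigeon Importer UI.

```python
# A minimal sketch of emailing a CSV attachment to the import inbox;
# sender, recipient and SMTP server are placeholders, not real addresses.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "daily orders import"
msg["From"] = "reports@example.com"              # placeholder sender
msg["To"] = "your-import-address@example.com"    # placeholder import inbox

# Attach the CSV file (a gzipped CSV would use application/gzip instead)
with open("orders.csv", "rb") as f:
    msg.add_attachment(f.read(), maintype="text", subtype="csv",
                       filename="orders.csv")

with smtplib.SMTP("smtp.example.com") as smtp:   # placeholder SMTP server
    smtp.send_message(msg)
```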
Orchestrator's email notifications were redesigned.
If anything goes wrong, Orchestrator sends you a brief visual overview. All necessary details are accessible through the UI. We're no longer spamming you with a long list of logs.
...were redesigned, so your debugging scenario should work much more smoothly. This is the redesigned page with all job and task details:
Today we're announcing a new Storage API feature: Bucket Credentials (API here).
If you're using Keboola Connection with a Redshift backend, you can have read-only credentials (direct SQL access) to any Redshift bucket.
In the Storage API Console, go to Bucket Detail > Credentials and press the "Create new credentials" button:
Describe the new credentials (you can have multiple credentials assigned to each bucket!):
When you create credentials, carefully copy & paste them into your SQL client or preferred remote service (jackdb.com, chartio.com, etc.). After closing the displayed credentials, you can't display their settings again:
In case you need to re-use already created credentials, you have to delete them and create a new combination of username and password. All existing credentials are listed under their bucket:
WARNING: Always use SSL when accessing your data. The generated credentials open access to your dedicated AWS Redshift cluster. Please read "Configure Security Options for Connections". The Redshift cluster's CA certificate can be downloaded here.
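For example, a minimal sketch of connecting with the generated read-only credentials over SSL using psycopg2 could look like this; the host, database, user, password and schema values are placeholders you copy from the Storage API Console, and the CA bundle path assumes you downloaded the Redshift certificate mentioned above.

```python
# A minimal sketch of an SSL connection to the bucket's read-only Redshift
# credentials; all connection values below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="your-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="your_database",          # placeholder
    user="bucket_readonly_user",     # placeholder read-only user
    password="generated-password",   # placeholder
    sslmode="verify-ca",             # require SSL and verify the cluster CA
    sslrootcert="redshift-ssl-ca-cert.pem",  # downloaded Redshift CA bundle
)

with conn.cursor() as cur:
    cur.execute('SELECT * FROM "in.c-main"."orders" LIMIT 10')  # placeholder bucket schema and table
    for row in cur.fetchall():
        print(row)
conn.close()
```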