Some GoodData Writer loads may have failed in the past few hours with confusing "Target model is invalid" errors. This was caused by a bug in the data modelling part of the application, which we have already fixed. We are sorry for any inconvenience; everything should work as expected now.
We have made some under-the-hood improvements to GoodData Writer which allow you to rename datasets already exported to GoodData and to freely rename attributes and facts (until now, GoodData Writer automatically prepended the dataset name to each attribute/fact title).
It is also possible to change the default GoodData identifiers of datasets and their columns. This is useful especially for projects migrated from CloudConnect; otherwise you probably won't need this feature.
This improvement is available immediately for new writers; existing ones will be migrated in several waves over the upcoming days.
Date dimension management in GoodData Writer has an improved UI. In addition, you can now specify custom GoodData identifiers and templates for your dimensions.
Listings of users (GET /users), projects (GET /projects) and project users (GET /project-users) have been limited to 10,000 results. Until now, listing many more results didn't work anyway; the Writer returned an error instead of the listing. So if your writer contains more results and you have a script which processes them, you will need to update it to support pagination. Pagination works using the parameters offset and limit (see the docs for more information).
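The pagination loop could look like the following sketch. It assumes the listing call is wrapped in a fetch_page function that takes offset and limit and returns one page of results; that wrapper (its name and response handling) is an illustrative assumption, not part of the Writer API.

```python
def list_all(fetch_page, limit=1000):
    """Yield every result by walking offset/limit windows.

    fetch_page(offset=..., limit=...) is assumed to return one page
    of results as a list.
    """
    offset = 0
    while True:
        page = fetch_page(offset=offset, limit=limit)
        yield from page
        if len(page) < limit:
            # A page shorter than the limit means there are no more results.
            break
        offset += limit
```

The same loop works for any of the three listing calls, since they all accept the same offset and limit parameters.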
The parameter tableId of the data loading API calls /gooddata-writer/load-data and /gooddata-writer/load-data-multi is now optional. If the parameter is not present, the data load will be performed on all active tables (i.e. tables with the flag export=1). See the Apiary docs for details.
Support in the Orchestrator UI is being prepared. When it is ready, you will be able to replace the /gooddata-writer/upload-project call with one of these calls to speed up loads and avoid unnecessary model updates.
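A request body for /gooddata-writer/load-data might be assembled as in this sketch. The tableId parameter and the export=1 behaviour come from the announcement above; the writerId field name is an assumption, so check the Apiary docs for the authoritative request shape.

```python
def build_load_data_body(writer_id, table_id=None):
    """Build a JSON body for /gooddata-writer/load-data.

    tableId is optional: when omitted, the Writer loads all active
    tables (those with the flag export=1).
    """
    body = {"writerId": writer_id}  # field name is an assumption
    if table_id is not None:
        body["tableId"] = table_id
    return body
```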
Today, Oct 7 2014, Keboola is announcing an End-of-Life date of Oct 31, 2014 for all GoodData Writers that use CL Tool as their default modelling interface.
GoodData Writers that are currently using CL Tool will be seamlessly migrated to the GoodData LDM API without prior notification. In case of any hiccups, please contact our support.
Several projects may have experienced data load errors described as "Csv handling fails" during the night and morning. The problem is fixed and should not occur anymore. We apologize for any inconvenience.
GoodData Writer is experiencing intermittent network errors during downloads of CSV files from Storage, which cause failures of some load data jobs. We are working on a fix. Thank you for your patience.
Update: The network errors seem to have ceased. In addition, we have published a fix which should properly handle such errors in the future. We apologize for any inconvenience.
We have deployed improvements to Mandatory User Filters which require a change in writer configuration. Your configurations will be migrated automatically, but here is a list of the changes:
Table filters contained the columns name, attribute, element, operator, uri; it now contains only name, attribute, operator, value (element has been renamed to value, and uri has moved to the filters_projects table).
Table filters_projects contained the columns filterName, pid; it now contains uri, filter, pid.
Table filters_users contained the columns filterName, userEmail; it now contains id, filter, email (id is generated by the Writer and is unique for each combination of filter name and email).
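The automatic migration of a filters_users row can be pictured with this sketch. The dict-based row representation and the make_id callback are illustrative assumptions; in reality the Writer generates the id internally during migration.

```python
def migrate_filters_users_row(old_row, make_id):
    """Map the old {filterName, userEmail} columns of filters_users to
    the new {id, filter, email} columns.

    make_id is assumed to produce a value unique per
    (filter name, email) combination.
    """
    return {
        "id": make_id(old_row["filterName"], old_row["userEmail"]),
        "filter": old_row["filterName"],
        "email": old_row["userEmail"],
    }
```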
Uploading a table (API call /upload-table) now produces two separate jobs in the queue: one for the model update and one for the data load. You can also call these jobs separately via the API calls /update-ldm and /load-data; see the documentation.
Update: The API call /load-data now also accepts a list of multiple tables to upload.
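Putting this together, a single /upload-table call can be expressed as the two separate jobs, as in this sketch. The endpoint paths come from the announcement above; the writerId, tableId and tables field names in the bodies are assumptions, so consult the documentation for the authoritative shapes.

```python
def upload_table_as_two_jobs(writer_id, table_id):
    """Return the two API calls that together replace /upload-table:
    a model update followed by a data load."""
    return [
        ("POST", "/gooddata-writer/update-ldm",
         {"writerId": writer_id, "tableId": table_id}),   # field names assumed
        ("POST", "/gooddata-writer/load-data",
         {"writerId": writer_id, "tables": [table_id]}),  # field names assumed
    ]
```

Splitting the two jobs this way is what lets you skip the model update entirely when only the data has changed.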