GoodData API issues

Some older GoodData Writer configurations may experience failures of update model jobs, accompanied by the error message "You can not use a model that contains two or more facts identified by ids having same two tailing segments.". The problem is caused by unexpected changes in GoodData's Project Model API after their Saturday release. We are working to fix the situation in cooperation with GoodData support. Thanks for your patience.

UPDATE (18:00 CET): The problem still isn't solved and will apparently take some more time. We will keep you updated. However, you should be able to load data if you avoid updating the project model. That means using the API calls load-data or load-data-multi instead of update-table and update-project (see the API documentation).
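As a hedged sketch of the workaround, the snippet below builds a request for a load-data job instead of update-table/update-project. The payload shape, field names and endpoint shown in the comment are assumptions for illustration, not verbatim from the API documentation:

```python
# Hypothetical helper: build the payload for a load-data call so the
# project model is left untouched. Field names are assumptions; consult
# the API documentation for the real request format.

def build_load_data_request(writer_id, tables, incremental=False):
    """Build an (assumed) payload for the Writer's load-data API call."""
    return {
        "writerId": writer_id,
        "tables": tables,          # Storage API table ids to load
        "incrementalLoad": incremental,
    }

payload = build_load_data_request("my-writer", ["out.c-main.products"])
# POST this payload to the load-data endpoint instead of update-table, e.g.:
# requests.post(".../gooddata-writer/load-data", json=payload, headers=...)
```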

UPDATE (Mar 22 17:30 CET): GoodData released a fix which prevents those failures of update model jobs. The Writer should work without problems now. Thanks for your patience.

AdWords Extractor API Update

The Extractor will be updated to version v201601 of the AdWords API at the beginning of April. Please review your configuration by the end of March to make sure it does not use metric names that are no longer supported. Also, do not use the new metrics until the update is complete; we will let you know when it happens.

GoodData Writer configuration status update

By now, all writers read the configurations of datasets, date dimensions and filters from the Components Configurations API only. The corresponding tables in Storage API still exist and are updated, even though they are no longer used for reading.

The users table is no longer used for reading or updating; information about users created by writers is stored exclusively in the Writer's backend. You can access the data using the API.
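A minimal sketch of reading the users data from the API follows. The response shape is an assumption for illustration; the real structure is described in the API documentation:

```python
# Hypothetical example of parsing a users API response. The "users",
# "email" and "uid" keys are assumptions, not the documented schema.
import json

def parse_users_response(body):
    """Extract (email, uid) pairs from an (assumed) users API response."""
    data = json.loads(body)
    return [(u["email"], u["uid"]) for u in data.get("users", [])]

sample = '{"users": [{"email": "user@example.com", "uid": "abc123"}]}'
print(parse_users_response(sample))
```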

The filters_projects table is no longer used; the GoodData URIs of the filters were moved to the filters section of the Component configuration.

The filters_users table is no longer used; information about filters assigned to users is obtained directly from the GoodData API. Note that this implicitly deprecates the sync-filters API call. The call still rebuilds filter-user relations according to the filters_users table, but the table itself is no longer updated.

The last tables actively used by Writers are projects and project_users, and they will be migrated soon, probably this week. They will be moved to the Writer's backend similarly to the users table.

Please don't delete the configuration tables yourself; the whole buckets will be deleted automatically once they are no longer used, probably within two weeks.

New version of AdWords Extractor

The AdWords Extractor has been rewritten as a Docker component, and the old version will be switched off by the end of March. Until then, its users are kindly requested to migrate their configurations to the new version. Here is how to do it:

  1. Choose AdWords v2 from the list of extractors.
  2. Create a new configuration and give it a name as usual.
  3. Click the Authorize Account button, which will redirect you to Google and ask for authorization to download your AdWords data.
  4. Add the extraction configuration to the Parameters text area. It is in JSON format and must contain your AdWords developer token, customer id, the bucket where you want the data to be saved, and the configuration of queries. Optionally it may contain the parameters since and until to specify the date interval for stats (it is "-1 day", i.e. "yesterday", by default). See the documentation for more information.
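To illustrate step 4, here is a hedged sketch of such a configuration. The post only names the required pieces (developer token, customer id, bucket, queries, since/until); the exact key names and query syntax below are assumptions, so check the documentation for the real format:

```json
{
  "developer_token": "YOUR_DEVELOPER_TOKEN",
  "customer_id": "123-456-7890",
  "bucket": "in.c-adwords",
  "since": "-1 day",
  "until": "-1 day",
  "queries": [
    {
      "name": "campaigns",
      "query": "SELECT Id, Name, Status FROM CAMPAIGN_PERFORMANCE_REPORT",
      "table": "campaigns"
    }
  ]
}
```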

Please note that the new Extractor saves data incrementally, which means you have to add primary keys if you want to load data into existing tables.

Upcoming configuration changes in GoodData Writer

Our GoodData Writer is going to start saving configurations to our Components Configurations in the Storage API, rather than the sys stage bucket and tables it uses now. This will bring a performance boost, reduce API response times (i.e. improve KBC UI performance), make configurations clearer, and bring better versioning and rollback possibilities. This change applies to bucket attributes (containing GoodData credentials and project identifiers), date dimensions, datasets and filters; see the API docs for more information.

We will proceed in several steps. First, the configurations of all writers will be automatically migrated to the Configuration storage in several waves over the upcoming days. You will be notified by the notification system in the KBC UI once the migration is complete. From then on, every configuration change performed through the API (including the KBC UI) will be written to both places. Please keep in mind, however, that if you alter your writer's configuration directly in the sys bucket, those changes won't be synchronised to the new configuration and you will need to manage the synchronisation yourself.

In the second step, we will switch to reading configurations from the Configuration storage rather than the sys bucket. In the third step, writing to the sys bucket will be stopped, and the sys bucket configuration tables (data_sets, date_dimensions and filters) will be deleted. We will inform you about each of these steps as they occur.

The remaining configuration tables (projects, users, project_users, filters_projects and filters_users) will not change for now, but in the long term they will be removed completely and their data will be stored only in the Writer's backend. So by the end of this journey, we will say goodbye to the whole configuration bucket in the sys stage.

Week in Review – February 1, 2016

DoubleClick Extractor Update

The DoubleClick Extractor has been updated to support version 2.3 of the DoubleClick API.

Bugfix for creating tables with a large number of columns

Creating tables with a large number of columns caused an internal error in Storage API. It now produces a user error with a proper error message. The current limits on the number of columns are 4096 on the MySQL backend and 1600 on Redshift.
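To make the limits concrete, here is an illustrative client-side sketch of the check. The limits come from the post; the function itself is hypothetical, not Storage API code:

```python
# Hypothetical pre-flight check mirroring the described behaviour:
# exceeding the backend column limit is a user error, not an internal one.

COLUMN_LIMITS = {"mysql": 4096, "redshift": 1600}

def check_column_count(columns, backend):
    """Raise a clear user-facing error when the column limit is exceeded."""
    limit = COLUMN_LIMITS[backend]
    if len(columns) > limit:
        raise ValueError(
            f"Too many columns: {len(columns)} exceeds the {backend} limit of {limit}"
        )
    return True

check_column_count(["col%d" % i for i in range(1600)], "redshift")  # within the limit
```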

GoodData Writer API news

We are progressively working on a new version of the GoodData Writer API. You can see its current status in Apiary. This week we finished the resource for handling user filters and SSO access to projects.

Other posts this week

Some failing data uploads in GoodData Writer

In recent days, several errors with messages like "Could not export table out.c-main.products from Storage API: Table Activities not found in bucket out.c-main." appeared unexpectedly. This is a direct consequence of this change from December. When you call the upload project, load-data or load-data-multi API calls without the tables parameter, the Writer takes all configured tables and tries to upload them. But when a table is missing from Storage API yet still has a configuration in the Writer, this failure occurs. It didn't happen earlier because the Writer automatically removed the configurations of deleted SAPI tables.

Because this problem confused several of our clients, we decided to make this behaviour more forgiving. Now, if you call upload project, load-data or load-data-multi without the tables parameter, the Writer will ignore the configurations of non-existing tables and won't fail. However, if you call load-data or load-data-multi with explicitly listed tables (in the tables parameter) and some of those tables don't exist, the job will still fail.
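The new behaviour can be sketched with a small hypothetical helper (not actual Writer code): without an explicit tables parameter, missing tables are silently skipped; with an explicit list, a missing table fails the job.

```python
# Illustrative sketch of the upload-table resolution described above.

def resolve_tables_to_upload(configured, existing, requested=None):
    """Decide which tables to upload, mimicking the Writer's new behaviour."""
    if requested is None:
        # Implicit upload: ignore configured tables missing from Storage API
        return [t for t in configured if t in existing]
    # Explicit list: a missing table is still an error
    missing = [t for t in requested if t not in existing]
    if missing:
        raise ValueError("Tables not found in Storage API: %s" % ", ".join(missing))
    return requested

configured = ["out.c-main.products", "out.c-main.activities"]
existing = {"out.c-main.products"}
print(resolve_tables_to_upload(configured, existing))  # skips the missing table
```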

We apologize for the confusion.