Job Failures
There were job failures between 4 AM and 9 AM UTC, caused by one of our servers dying a horribly slow and painful death.
We're sorry for the inconvenience; we have restarted the affected orchestrations.
Hello everyone,
Here's the list of the most important changes we made last week:
A new version of the GoodData Extractor has been released; you can find it as GoodData Reports.
The old version of the GoodData Extractor has been deprecated, and we are preparing a migration tool to ease the transition to the new one.
Read more at http://status.keboola.com/new-version-of-gooddata-extractor.
It is now possible to create a project with a Promo Code. This can be useful for workshops or other events where attendees will be using Keboola Connection. Please contact your Maintainer for more details.
Enjoy your week and Merry Christmas!
We have released a new version of the GoodData extractor. It runs fully on our Docker infrastructure and therefore takes advantage of its full potential. It no longer depends on the GoodData Writer and calls the GoodData API directly. It can either take credentials from a chosen writer, or you can specify your own. You then specify the URIs of the reports you want to download, just as in the old version.
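For illustration, a configuration for the new extractor might look something like the sketch below. The parameter names (`writer_id`, `reports`) are hypothetical stand-ins rather than the component's documented schema; the report URIs use GoodData's usual `/gdc/md/{project}/obj/{id}` form.

```json
{
  "parameters": {
    "writer_id": "my-gooddata-writer",
    "reports": [
      "/gdc/md/{project-id}/obj/12345",
      "/gdc/md/{project-id}/obj/67890"
    ]
  }
}
```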
The old extractor is now deprecated and will be switched off by the end of January. A migration wizard to help you with the transition will appear in the old extractor's UI later today.
Howdy everybody,
Here are some highlights from the last week:
Be sure to tune in again next week for more updates!
Here's the list of the most important changes we made last week:
- Docker runner supports headless CSV: if columns are specified in the output mapping (in the manifest file or in the configuration object), the corresponding CSV file is considered to have no header. More details. See the sketch after this list.
- Docker runner supports sliced output CSV files: more than one CSV file can be mapped to one output table, and all such files are uploaded to Storage in parallel. This way, files bigger than 5 GB can be uploaded to Storage. More details.
- A little cherry on top: adding a new table in the Database writer UI now maps the Storage table name to the database table name rather than the table ID:
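To sketch how the two Docker runner changes combine (the file layout and manifest fields below are assumptions based on the descriptions above, not taken from the linked docs): several header-less slices are written into a directory named after the table, and a manifest listing the columns tells the runner how to interpret them and upload them in parallel as a single table.

```
out/tables/events.csv/         <- a directory: each file inside is one slice
out/tables/events.csv/part001  <- headless CSV slice
out/tables/events.csv/part002  <- headless CSV slice
out/tables/events.csv.manifest <- manifest shared by all slices
```

```json
{
  "destination": "in.c-main.events",
  "columns": ["id", "event", "created_at"]
}
```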
Happy Monday, and have a great week ahead!
We're encountering a series of "Found orphaned table manifest" errors in Generic Extractor. We have identified the root cause and are reverting the latest changes to get it back to a fully working state. We will restart all affected orchestrations.
We'll update this post when the fix is deployed.
We're sorry for the inconvenience.
UPDATE 7:25 PM CEST: The fix has been deployed to production; we're restarting all failed orchestrations.
A few orchestrations failed around 13:20 CET. The problem was caused by temporarily unavailable job storage.
We're sorry for the inconvenience.
Here's the list of the most important changes we made last week.
There were job failures between 10:30 AM and 12:50 PM, caused by low disk space on one of the job workers.
We're sorry for the inconvenience.
It's my great pleasure to announce another major update to one of the first components I ever built: the YouTube Reporting API extractor.
The YouTube Reporting API offers a very simple way to download daily reports that belong to the Content Owner of a YouTube channel (in other words, your channel account must be authorized by Google if you want to download data from this API). The reports are generated by predefined jobs, and all you need to do is download the results. That is exactly what this extractor was built for.
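For context, the raw flow against the YouTube Reporting API looks roughly like the sketch below, using the google-api-python-client library. It is an illustration of the API flow, not the extractor's actual code, and the `credentials` object is assumed to be already authorized.

```python
from io import FileIO

from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

# `credentials` is assumed to be an authorized OAuth2 credentials object
# for an account with Content Owner access.
reporting = build("youtubereporting", "v1", credentials=credentials)

# List the reporting jobs defined for this account.
jobs = reporting.jobs().list().execute().get("jobs", [])

for job in jobs:
    # Each job periodically generates reports; list what is available.
    reports = (
        reporting.jobs().reports().list(jobId=job["id"]).execute().get("reports", [])
    )
    for report in reports:
        # Download the report media. Following Google's own samples, the
        # request is created with a placeholder resource name and its URI
        # is then pointed at the report's downloadUrl.
        request = reporting.media().download(resourceName=" ")
        request.uri = report["downloadUrl"]
        with FileIO(f"{job['id']}_{report['id']}.csv", mode="wb") as fh:
            downloader = MediaIoBaseDownload(fh, request, chunksize=-1)
            done = False
            while not done:
                _, done = downloader.next_chunk()
```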
Because the general process is very simple, the first version of this extractor was completed quickly. However, while using it in production, we found that Google occasionally triggers background actions that regenerate reports, which broke the original merge logic and produced incorrect results. For that reason, the first version was not reliable enough for production deployment.
Based on that experience, I really wanted to fix the problematic parts of the original version and turn this extractor into a project that is fun to use. I believe I have succeeded, and I am extremely proud of what I achieved in this update.
You can read the full description in the documentation. In a nutshell, this extractor downloads reports generated by jobs, and it offers many extra features that help you manage these downloads conveniently. For example, the configuration requirements of the first version have been reduced significantly, and several options for creating a backup (to S3) have been added. Most importantly, all data should now be downloaded correctly.
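Purely as an illustration of what a minimal configuration with an S3 backup might look like; all parameter names below are hypothetical rather than the extractor's real schema (the `#` prefix follows Keboola's convention for encrypted values):

```json
{
  "parameters": {
    "content_owner_id": "CONTENT_OWNER_ID",
    "backup_s3": {
      "bucket": "my-backup-bucket",
      "#access_key_id": "...",
      "#secret_access_key": "..."
    }
  }
}
```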
This extractor is developed independently by Blue Sky Media. For more information on how to use it, please refer to the documentation. If you run into issues or have further questions, please contact me directly.