On Dec 16 between 5pm and 9pm CEST, some MySQL and Redshift queries containing comments were malformed by our parser.
This bug has been fixed and deployed, and all affected orchestrations have been resumed. We're sorry for the inconvenience!
Tables are no longer automatically added to the GoodData Writer UI; you have to add them there explicitly. Naturally, you can also delete tables that are no longer needed. This improvement should benefit clarity and unifies behaviour across the other writers. Non-configured tables (i.e. those with all columns ignored) will soon be removed from your existing configurations by an automated script.
There was a bug affecting insightsPages_pivoted and insightsPosts_pivoted configurations for pages or posts with no values in some metrics. In certain situations, a zero value was replaced with the value 1.
To fix any affected data, you can run the extractor with parameters to backfill a given period of time, e.g.
{
  "since": "2012-01-01",
  "until": "2015-12-10"
}
Put these parameters in your orchestrator configuration or use a separate API call to create a job.
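As a rough illustration, the parameters above are just a small JSON object. A minimal Python sketch of how such a backfill payload could be assembled and validated before handing it to the orchestrator or an API call (the helper name and validation logic are our own illustration, not part of the extractor):

```python
import json
from datetime import date

def backfill_params(since: str, until: str) -> str:
    """Build the JSON parameters for a backfill run.

    Raises ValueError if either date is not a valid ISO date
    or the range is inverted.
    """
    start, end = date.fromisoformat(since), date.fromisoformat(until)
    if start > end:
        raise ValueError("'since' must not be after 'until'")
    return json.dumps({"since": since, "until": until})

# Example: the range from the post above.
payload = backfill_params("2012-01-01", "2015-12-10")
print(payload)  # {"since": "2012-01-01", "until": "2015-12-10"}
```

Validating the range locally is cheap insurance against a silently empty backfill job.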
We're sorry for this bug and for any trouble it caused. See the Facebook Insights Extractor documentation for additional info.
Sklik Extractor has been rewritten as a Docker component, and the old version will be switched off by the end of December. Until then, its users are kindly requested to migrate their configurations to the new version. Here is how to do it:
Please note that the new extractor saves data incrementally, which means you have to add primary keys if you want to load data into existing tables. You can find the required format of the data tables in the documentation.
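Incremental loading updates existing rows by primary key instead of blindly appending, which is why the keys matter. A plain-Python sketch of those semantics (our own illustration, not the extractor's actual code):

```python
def incremental_load(table, new_rows, primary_key):
    """Merge new_rows into table, matching on the primary_key columns.

    Rows whose key already exists are updated in place; unseen keys
    are appended. Without a primary key, every run would simply
    append duplicate rows.
    """
    index = {tuple(row[c] for c in primary_key): i
             for i, row in enumerate(table)}
    for row in new_rows:
        key = tuple(row[c] for c in primary_key)
        if key in index:
            table[index[key]] = row      # update the existing row
        else:
            index[key] = len(table)
            table.append(row)            # insert a new row
    return table

stats = [{"id": 1, "clicks": 10}]
incremental_load(stats, [{"id": 1, "clicks": 12}, {"id": 2, "clicks": 3}], ["id"])
# stats is now [{"id": 1, "clicks": 12}, {"id": 2, "clicks": 3}]
```

This is exactly why a table without primary keys keeps growing with duplicates under incremental loads.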
A few months ago I made my first contribution to the Keboola ecosystem and created a component for Keboola Connection that downloaded data from iTunes Connect. It worked well, but given my limited knowledge of how to really work with Keboola Connection (and my limited practical programming experience), the component's functionality was quite limited: it could download just one type of report. Two days ago I decided to improve this extractor significantly, and I can proudly share the results of this update.
From the business perspective, the major update is the ability to download more types of reports. The last version allowed you to download only Sales data; now it is possible to extract Earnings data as well. There is also an automatic fiscal calendar generator (5-4-4) that makes sure the earnings data is downloaded in the correct order. It is generated dynamically, and its results were validated against Apple's calendar, so the dates are set properly. Another improvement is more options for downloading Sales data: more types of grain are supported in the new version.
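The 5-4-4 generator mentioned above can be sketched in a few lines: each fiscal quarter consists of periods of 5, 4 and 4 weeks, giving 13 weeks per quarter and 52 per year. The sketch below is our own toy version, and the example start date is purely illustrative, not Apple's actual fiscal year start:

```python
from datetime import date, timedelta

def fiscal_544_periods(year_start: date):
    """Generate the 12 periods of a 5-4-4 fiscal year.

    Each quarter has periods of 5, 4 and 4 weeks (13 weeks per
    quarter, 52 weeks total). Returns a list of
    (period_number, start_date, end_date) tuples.
    """
    weeks = [5, 4, 4] * 4           # 12 periods in the year
    periods, start = [], year_start
    for n, w in enumerate(weeks, start=1):
        end = start + timedelta(weeks=w) - timedelta(days=1)
        periods.append((n, start, end))
        start = end + timedelta(days=1)
    return periods

# Example with an illustrative fiscal year start of Sep 28, 2014.
cal = fiscal_544_periods(date(2014, 9, 28))
print(cal[0])  # (1, datetime.date(2014, 9, 28), datetime.date(2014, 11, 1))
```

Generating the calendar dynamically like this avoids hard-coding period boundaries, which is what makes the validation against Apple's published calendar meaningful.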
From the technical perspective, there is an avalanche of improvements. In a nutshell, the scripts were simplified by removing unnecessary logic (as my understanding of Keboola improved), and the handling of the asynchronous flow was completely rewritten. Files are also uploaded dynamically, which removes the need for a static Table Output Mapping and simplifies the input parameters. Credentials are now encrypted.
Check the documentation & source code for more information. In case of any question or issue, don't hesitate to contact me at radek@bluesky.pro. I am happy to provide more details if you are interested.
If you have a long list of transformations, or they're just quite complex, you may find yourself somewhat annoyed by the responsiveness of the UI. If that's the case, you'll be delighted to know that we have a solution ready to go that significantly improves the user experience there while maintaining the same functionality.
Before we roll out this feature to all projects, we'd like to invite you to beta-test it. If you're crunching through a vast list of transformations, you can benefit from it today. The migration process is easy; just let us know at support@keboola.com and we'll migrate your configuration for you. There are no changes in the UI or anywhere else, and nothing will be stopped or lost. Numerous backups are made along the way, and rollback is easy.
Thanks for participating!
We have finally updated the MySQL transformation server to MariaDB 5.5.44, the same version as the sandbox server.
Everything should be running as expected and no migration is required, but if you encounter any issues or incorrect behaviour, let us know.
Some data loads are failing due to a bug in WebDAV storage on GoodData's side, introduced in their Saturday release. They are working on a fix; please be patient. Jobs affected by this problem fail with the message "Csv file has not been uploaded to 'https://secure-di.gooddata.com/uploads/....csv'". We apologize for the inconvenience.
Update 16:15 UTC: According to GoodData support, this problem unfortunately won't be fixed before November 26, 9:00 UTC. See their maintenance announcement.
Access to GoodData projects from writers in Keboola Connection has changed. When you enter a project in KBC, you are no longer automatically added to any GoodData projects. Instead, there is a new "Enable Access to Project" button in each writer that needs to be explicitly confirmed first.
Getting actionable insights from data is usually more discussed than actually implemented. To succeed in this task, you need to combine three completely different worlds: data, analytics and business. Practically, it means creating a team that consists of someone who knows the data, someone with the relevant analytical skill set (call it data science, machine learning or predictive modelling) and someone with the domain knowledge. These people often speak different languages and use different tools.
Thanks to Keboola and the new aLook Analytics app, the main integration and collaboration issues are gone. The beauty of the solution lies in the simple integration of custom analytics into standard data processes. All the issues around data processing, data transfer and model deployment have already been sorted out, so the primary focus really is on solving the business problem. No middleman needed. Your data person deals with data processing, our data scientist trains the best possible model in R, and finally your business person makes sure you'll be able to make more money using this new information.
The job of aLook Analytics is to provide the client with tailor-made predictive models developed in R that directly support their KPIs. Some specific examples of these models are:
We have expertise in building such models in various industries – retail banking and e-commerce, but also behavioural talent analytics and sport.