Today we are pleased to introduce a new component that will help you index your KBC data in Elasticsearch.
It is now available in our component list with our generic UI. Please check out the documentation on Github to help you get started.
We have experienced a brief outage of the Keboola Connection application and API between 12:42 and 12:45 UTC. Some jobs might have failed with an application exception.
We're sorry for the inconvenience. If you're not sure whether this caused a failed job, please contact us at support@keboola.com.
We have experienced a brief network outage between our server nodes and AWS SQS between 15:41:59 and 15:42:22 UTC. Some jobs might have failed with an application exception.
We're sorry for the inconvenience. If you're not sure whether this caused a failed job, please contact us at support@keboola.com.
The old generic extractor has been fully replaced by ex-generic-v2, and will no longer be supported.
All existing and actively used configurations have been migrated to the new extractor. If your project used the old version (configured in SAPI tables), you should have already been contacted and received a configuration for the new extractor. If you use the old extractor and haven't received a message about it, please contact us at support@keboola.com.
The old extractor will be taken down on 29.2.2016.
Following extensive testing with volunteer users (thank you all, you know who you are!), we will migrate transformation configuration storage for all projects. The migration will happen in several batches over a five-day period next week. We'll be monitoring the process, but we don't expect any failures, outages or other bugs.
The migration is expected to run from Monday, Feb 15th to Friday, Feb 19th. Once your project is migrated, you'll get a notification (for each project in which you are an admin).
During the migration process all your transformations will be serialized and uploaded to your project's File uploads for safekeeping.
Why?
The current method of storing configuration in sys.tr-* buckets has become obsolete and practically prohibits new features. Storing transformation configuration natively in a dedicated configuration storage (part of Storage API) brings some immediate benefits and opens the door to exciting new features.
Benefits
Should you encounter any issues during the migration or in case you have any concerns/questions regarding your project, don't hesitate to contact us at support@keboola.com.
Today, 11.2.2016, between 11:50 and 12:50 CET, we encountered a short outage of our MySQL transformation server. Running transformations failed during that time and no new transformations could be started. The server is now up and everything is running properly. We are sorry for any inconvenience.
We're adding a new channel to inform you about important events in Keboola Connection or any of your projects. Notifications show up as a bell icon next to your name or the Keboola logo.
When there's a new notification waiting for you, the bell shows a red badge. Clicking the icon takes you to the Notifications page, where you'll see all notifications. Unread notifications are highlighted, and you can mark each notification as read, or mark all of them as read in a single click.
Notifications will inform you
It's not a Facebook feed, we promise we won't bother you too often.
We have added Python support to our Transformation engine. Python is a handy and versatile programming language. It also has a lot of useful libraries. Particularly interesting may be the SciPy stack. All Python transformations run in our public docker image with Python 3.5.1 and have an 8GB memory limit.
The interface is very similar to the existing R transformations. You start by setting the input and output mapping in the UI. Tables from the input mapping are created as CSV files in the in/tables directory, and result CSV files from the out/tables directory are uploaded to your project's Storage. All your Python code has to do is read the CSV files, do some magic, and write the result CSV files.
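The read-transform-write flow described above can be sketched in a few lines of plain Python. This is only an illustration, not production code: the file names (source.csv, result.csv) and the doubling "magic" are made up for the example, and in a real transformation Keboola creates the in/tables files for you based on your input mapping, so you would not create them yourself.

```python
import csv
import os

# In a real transformation these directories and the input file are
# prepared by Keboola Connection; we create them here only so the
# example is self-contained and runnable.
os.makedirs("in/tables", exist_ok=True)
os.makedirs("out/tables", exist_ok=True)
with open("in/tables/source.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "value"])
    writer.writerows([["1", "10"], ["2", "20"]])

# The transformation itself: read a CSV from in/tables, apply some
# logic (here: double each value), and write the result to out/tables,
# from where it would be uploaded to Storage.
with open("in/tables/source.csv", newline="") as src, \
     open("out/tables/result.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))  # copy the header row
    for row in reader:
        writer.writerow([row[0], str(int(row[1]) * 2)])
```

Any pure-Python logic (or any of the preinstalled SciPy stack) can go in the transform step; the only contract with the platform is the CSV files in the two directories.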
If you need some packages from PyPI, you can list them in the packages section of the UI. By the way, the SciPy stack is installed by default.
If you are interested in writing Python transformations, we have an introduction article in documentation with some examples that show how to work with the input and output files.
Access to component configurations can now be granted to Storage API tokens.
Until now only tokens assigned to administrators or tokens with access to all buckets had access to component configurations.
Running an empty transformations bucket will no longer throw an error.
Orchestrations can now be composed and executed by other orchestrations. Nesting is limited to two levels; deeper nesting will trigger an error on orchestration execution.
Recent issues with an exceeded client ID count have been resolved.
DoubleClick Extractor has been updated to support version 2.3 of DoubleClick API.
Creating tables with a large number of columns caused an internal error in Storage API. It now produces a user error with a proper error message. The current limits on the number of columns are 4096 on the MySQL backend and 1600 on Redshift.
We are progressively working on a new version of the GoodData Writer API. You can see its current status in Apiary: http://docs.keboolagooddatawriterv2.apiary.io. This week we finished resources for handling user filters and SSO access to projects.