Increased Error Rate of Components Using Specific Processor

There was an increased error rate for components using keboola.processor-create-manifest. Only component configurations that used an empty enclosure in their CSV settings were affected.

Affected components may include data sources such as AWS S3, FTP, and others.
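For illustration, an affected configuration would look roughly like the sketch below. The shape follows Keboola's usual processor configuration format, but treat the exact field values as an example rather than a verbatim config:

```python
# Illustrative sketch (not a verbatim config): a data source configuration
# that runs keboola.processor-create-manifest with an empty "enclosure"
# in its CSV settings -- the case affected by this incident.
affected_configuration = {
    "processors": {
        "after": [
            {
                "definition": {"component": "keboola.processor-create-manifest"},
                "parameters": {
                    "delimiter": ",",
                    "enclosure": "",  # empty enclosure triggered the errors
                },
            }
        ]
    }
}
```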

We are sorry for any inconvenience. Please feel free to restart your jobs manually.

2022-06-20 16:44 UTC - We are investigating an increased error rate for some components using keboola.processor-create-manifest.

2022-06-21 07:22 UTC - We have identified the root cause and are continuing to work on a fix.

2022-06-21 12:30 UTC - The incident is resolved; the last occurrence of the error was at 12:13 UTC.

Delayed processing of jobs in the AWS EU stack

2022-02-01 09:10 UTC We are seeing a higher number of jobs in the waiting state than usual. We are investigating the issue.

2022-02-01 10:02 UTC There was increased traffic on the EU Snowflake warehouse, so we upgraded it to a larger instance, and the queued jobs were processed immediately. The delay is resolved; no jobs are currently waiting for processing.

Increased error rate for components communicating with Google APIs

12.11.2021 10:43 CET

We are experiencing an increased error rate for components communicating with Google APIs.

Google reports several service disruptions.

We continue to monitor the situation.

12.11.2021 14:11 CET

We no longer see increased component failures; everything is working as expected.

Keboola-provided credentials for Snowflake and Redshift database writers

When configuring a Snowflake or Redshift database writer, you can use a Keboola-provided database.

In the past, when you selected this option, the credentials were stored in the configuration in plain text. Storing the credentials this way allowed you to copy the password and use it in your favorite database client (or another system) even if you didn't copy it right after it was created.

To improve overall security, we decided to show you the password only once and store it encrypted. From now on, when you create a new Keboola-provided database (Snowflake or Redshift), you will see the password only once, right after its creation.
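To illustrate the difference, here is a rough sketch of how the stored configuration changes. The "#" key prefix is Keboola's convention for values encrypted at rest; the host, user, and ciphertext shown are placeholders:

```python
# Rough sketch of the stored credentials before and after this change.
# All values below are placeholders; the "#" prefix marks keys whose
# values Keboola encrypts at rest.
credentials_before = {
    "host": "example.snowflakecomputing.com",
    "user": "writer_user",
    "password": "plain-text-password",  # readable by anyone who can view the config
}

credentials_after = {
    "host": "example.snowflakecomputing.com",
    "user": "writer_user",
    "#password": "KBC::ProjectSecure::...",  # shown in plain text only once, at creation
}
```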

Backward compatibility

The existing credentials will remain untouched. However, if you delete them, there is no option to recreate them the old way.

Week in review -- June 29th, 2020

New Features and Updates

Project Description

Project description is no longer in a read-only mode; you can modify it to fit your needs.

Looker Writer Connection Name

Deprecation of Storage API .NET Client

We decided to deprecate the old and no longer maintained .NET version of the Storage API client. As a replacement, we recommend one of the supported Storage API clients.
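For instance, migrating to the Python Storage API client (the kbcstorage package) could look roughly like this; the connection URL and token below are placeholders:

```python
# Minimal sketch using the Python Storage API client (kbcstorage package,
# installed with "pip install kbcstorage"). URL and token are placeholders.
from kbcstorage.client import Client

client = Client('https://connection.keboola.com', 'your-storage-api-token')

# List the buckets and tables in the project.
for bucket in client.buckets.list():
    print(bucket['id'])

for table in client.tables.list():
    print(table['id'])
```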

Renaming Storage Buckets and Tables

There's a separate post explaining this new feature.

Selecting Bucket in Input Mapping

You can select a whole bucket when adding new tables to Input Mapping. This was originally enabled only for transformations; now you can use this feature for all remaining components.

Bug Fixes

  • Generic Extractor no longer stops after the 2nd page when downloading data in child jobs (only configurations with the Limit Stop setting were affected).
  • CSV import component supports full load again (due to a bug, all imports were performed incrementally).
  • MySQL writer no longer writes an "empty string" instead of a null for columns with DATE and DATETIME data types.

New Components

  • CSOB CEB extractor for downloading bank statements from the CSOB CEB Business Connector service
  • Azure Blob Storage writer for exporting any input CSV files into designated Blob containers
  • Sisense writer for sending tables from Keboola Connection to a Sisense database platform
  • Zendesk writer for creating and updating Zendesk Support properties with the flexibility of defining their own parameters

Renaming Storage Buckets and Tables

An option to rename buckets and tables was one of the most requested features on our wishlist. It is very useful when you want to name your bucket by its contents (e.g., "email-orders") rather than "in.c-keboola-ex-gmail-587163382".

From now on, you'll be able to change the names of buckets and tables.

Rename Bucket

To rename a bucket, navigate to the bucket detail page, and click the pen icon next to the name parameter.

Then choose the name of your preference (there are some limitations though).

Rename Table

To rename a table, navigate to the table detail page, and click the pen icon next to the name parameter.

Then choose the name of your preference (the same limitations apply).
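If you prefer to rename programmatically, the rename corresponds to updating a bucket's or table's display name through the Storage API. The sketch below uses the requests library; the endpoint path and parameter name are assumptions based on Storage API conventions, so check the API reference before relying on them:

```python
# Hedged sketch: renaming a bucket by updating its display name via the
# Storage API. The endpoint path and "displayName" parameter are assumptions
# based on Storage API conventions; verify them against the API reference.
import requests

STORAGE_API = 'https://connection.keboola.com/v2/storage'
TOKEN = 'your-storage-api-token'  # placeholder

response = requests.put(
    f'{STORAGE_API}/buckets/in.c-keboola-ex-gmail-587163382',
    headers={'X-StorageApi-Token': TOKEN},
    data={'displayName': 'email-orders'},
)
response.raise_for_status()
print(response.json().get('displayName'))
```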

Consequent Changes

Although adding the option to rename a bucket or a table may not seem like a big deal, we had to make some substantial changes under the hood. Some of the consequences are worth mentioning here:

Hidden "c-" prefix

We no longer show the "c-" prefix in the names of buckets and tables. It is still a part of the bucket and table ID, but the ID is no longer displayed in most cases. If you need to access the ID for some reason, it is still available on the detail page of each bucket and table.

For example, a bucket with the ID "in.c-keboola-ex-gmail-587163382" is now displayed simply as "keboola-ex-gmail-587163382".

Stage Selector

When searching for a specific bucket or table, just select a stage and the buckets will be filtered by the selected stage.

Database Writers with Configuration Rows support

We're happy to announce the arrival of Configuration Rows, our new powerful configuration format, to database writers.

From now on, you'll see a migration button in the configuration detail of each database writer (Snowflake, MySQL, SQL Server, Oracle, PostgreSQL, Impala, Hive, and Redshift).

Just click Migrate Configuration and the configuration will be migrated to the new format.

After the migration, you'll see more information about each table. All tables can be easily reordered, so you can move more important tables to the top and they will be uploaded first.

Also, you will be able to see information about each table on a new table detail page, with Last runs and Versions in a sidebar.

Important Underlying Changes

The old configuration format had certain limitations that no longer apply in the new "rows format".

The following features are worth mentioning (a rough sketch of the format change follows the list):

  • Disabled tables will no longer be exported from Storage (previously, they were exported with limit=1 and not used in the database writer).
  • Each table has its own state with information about the date/time of the last import (previously, an upload of a single table cleared the state for other tables).
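As a rough illustration of the structural difference (field names are invented for the example, not copied from a real writer configuration):

```python
# Illustrative sketch only: field names are invented for the example.
# Old format: all tables share one "parameters" object and one state,
# so uploading a single table cleared the state of the others.
old_format = {
    "parameters": {
        "tables": [
            {"tableId": "in.c-sales.orders", "dbName": "ORDERS"},
            {"tableId": "in.c-sales.customers", "dbName": "CUSTOMERS"},
        ]
    },
    "state": {},  # single shared state
}

# Rows format: each table is its own configuration row with its own state,
# and rows can be reordered independently.
rows_format = {
    "rows": [
        {
            "parameters": {"tableId": "in.c-sales.orders", "dbName": "ORDERS"},
            "state": {"lastImport": "2020-06-01T12:00:00Z"},
        },
        {
            "parameters": {"tableId": "in.c-sales.customers", "dbName": "CUSTOMERS"},
            "state": {"lastImport": "2020-06-01T12:05:00Z"},
        },
    ]
}
```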


Deprecation of public File Uploads

If you are uploading a file to Storage (manually or automatically), there's an option to upload it with the Public flag. The file can then be accessed publicly outside of Keboola Connection.
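As a sketch of the behavior being deprecated, registering a file upload with the public flag via the Storage API might look like this. The flow is simplified (a real upload also sends the file body to the storage destination returned by this call), and the parameter spelling is an assumption:

```python
# Hedged sketch of the deprecated behavior: registering a file upload with
# the public flag set. Simplified flow; the "isPublic" spelling is an
# assumption, so check the Storage API reference before use.
import requests

STORAGE_API = 'https://connection.keboola.com/v2/storage'
TOKEN = 'your-storage-api-token'  # placeholder

prepared = requests.post(
    f'{STORAGE_API}/files/prepare',
    headers={'X-StorageApi-Token': TOKEN},
    data={'name': 'report.html', 'isPublic': 'true'},  # the flag being deprecated
).json()

print(prepared)  # files uploaded this way were reachable outside Keboola Connection
```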

Only a minority of Keboola Connection users take advantage of this feature, and they do so in a very non-standard way (e.g., for HTML files). That's why we decided to deprecate it. Also, the new File Storage we have prepared (Azure Blob Storage) doesn't support public File Uploads, and we want to make this behavior consistent across all supported File Storage Backends.

The option to create Public Files from the UI has been removed (effective with the publication of this post).

The option to create Public Files via an API will be removed in about three months, by the end of June 2020.

An alternative solution could be the AWS S3 Writer component, but we don't recommend relying on Public Files at all, not even outside of Keboola Connection.