Blocking SELECT queries in transformations

A new version of the Transformation API was released today with a new feature - we're now blocking all standalone SELECT queries. 

These queries do not perform any real operation on your data (unless accompanied by CREATE TABLE or CREATE VIEW), yet they force our servers to load the whole result into memory. If your transformation contains a pure SELECT query, it will now fail with a Query not valid error message. 

The fix is easy - delete or comment out the SELECT query; it won't affect your transformation.

Thanks for your understanding. 

New MySQL server for DB Writer

We have launched a new MySQL server for DB Writer. All current credentials (both for reading and writing) are now obsolete. If you have any applications connecting to the MySQL database provided by DB Writer, please update the credentials from the writer's page. 

Loading binary files in R transformations

If you want to load data into your R transformation that can't be stored in a table, you can now use the Saved Files feature. It allows you to specify multiple tags; for each tag the engine downloads the latest file in File Uploads carrying that tag. If no file is found, the transformation fails. The files are stored in /data/in/user/{tag} and the manifest files in /data/in/user/{tag}.manifest.

This comes in extremely handy when you pre-generate binary data for a transformation externally (a model, bucketing criteria). You just upload the file to File Uploads, assign it a tag, and then use that tag in the transformation. 
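
For illustration, here is a minimal sketch of reading such a file and its manifest inside a transformation. It is shown in Python only for brevity - in an R transformation you would read the same paths with R's own file functions - and the tag name "model" is purely an example:

  # Minimal sketch: read a binary file delivered by the Saved Files feature.
  # The tag "model" is only an example; the /data/in/user/{tag} layout comes
  # from the post above.
  tag = "model"

  with open("/data/in/user/%s" % tag, "rb") as f:
      blob = f.read()                      # raw bytes of the latest file with this tag

  with open("/data/in/user/%s.manifest" % tag) as f:
      manifest = f.read()                  # metadata describing the downloaded file

  print("loaded %d bytes for tag %s" % (len(blob), tag))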


AWS SQS: System wide outage [RESOLVED]

Amazon Simple Queue Service is reporting an increased error rate in our main region. This affects the majority of our applications and APIs. We hope for a quick fix; please bear with us, we'll post any updates here.

UPDATE 11:41pm PST / 08:41 CEST: We've migrated Storage API to SQS in a different region, so it's fully functional now. We're working on migrating all other APIs and apps to a different region as well.

UPDATE 00:05am PST / 09:05 CEST: All components should be working now; we're still migrating the process-terminating queues (this bit is not working yet). You can restart your failed jobs. 

UPDATE 00:52am PST / 09:52 CEST: Everything is back up and working normally. We're sorry for any inconvenience. You can now restart failed orchestrations and terminate any stuck or waiting processes.

Misconfigured MySQL transformation server

On Wednesday at 9.15am PST we incorrectly configured sql_mode on the MySQL transformation server to the value STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION.

This caused many errors in your transformations and orchestrations - under strict mode, MySQL rejects values it previously coerced silently (for example empty strings loaded into numeric columns), mostly with messages like

  • Invalid datetime format: 1292 Truncated incorrect datetime value: ...
  • String data, right truncated: 1406 Data too long for column ...
  • Incorrect decimal value: '' for column ...
  • Incorrect integer value: '' for column ...

We identified the error and fixed it on Thursday at 12.20am PST.

We are sorry for the inconvenience; we're working on setting up the environment so that this error can't happen again. We tried to rerun all failed orchestrations, but if anything is left, please restart the orchestration - everything should run correctly now.

Action Required: Facebook Insights Extractor Token

Facebook might have silently dropped permissions (read_insights and manage_pages) from the token provided to the Facebook Insights Extractor (ex-facebook). If so, your project might be missing insights data. Here are two simple steps to verify and fix the issue.

Verify your token's permissions

Paste the token(s) you have stored in the configuration into the debugger tool https://developers.facebook.com/tools/debug/ and look for the permissions in the results. If both read_insights and manage_pages are present in the Scopes section, your token is valid.

If not, please create a new token.
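
If you prefer to check programmatically instead of pasting tokens into the web debugger, the Graph API's debug_token endpoint returns the same Scopes information. A minimal sketch in Python, assuming the requests library and an app access token for the app that issued the extractor's token:

  import requests

  # Sketch only: inspect a token's scopes via the Graph API debug_token endpoint.
  # APP_ACCESS_TOKEN is a placeholder - it must belong to the app that issued the token.
  APP_ACCESS_TOKEN = "app-id|app-secret"
  TOKEN_TO_CHECK = "token copied from the extractor configuration"

  resp = requests.get(
      "https://graph.facebook.com/debug_token",
      params={"input_token": TOKEN_TO_CHECK, "access_token": APP_ACCESS_TOKEN},
  )
  scopes = set(resp.json().get("data", {}).get("scopes", []))

  if {"read_insights", "manage_pages"} <= scopes:
      print("Token is valid for the Insights extractor.")
  else:
      print("Missing permissions, create a new token. Current scopes:", sorted(scopes))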

Create a new token

Go to https://syrup.keboola.com/ex-facebook/token and authorize our extractor to access your account. You'll get a new token that you can paste back into your configuration.

Backfill

If you have any gaps in your insights data, you can increase the overlap periods to automatically backfill the missing data, or specify the period manually. See the documentation for more details; if you're stuck, get in touch with us here in the comments section or at support@keboola.com.

Elasticsearch Failure - Components not working

Our Elasticsearch Syrup cluster is not responding. This cluster stores all information about all components' jobs. Storage works fine, but the rest of the system has come to a halt. We're investigating the issue.

UPDATE 10:30pm PST / 7:30am CEST: Cluster is back online, all operations resumed or restarted. We're sorry for any inconvenience.

MySQL Transformation input mapping size limit

We'll be introducing a limit on the size of tables imported into a MySQL transformation. 

Why? Processing large tables in MySQL is very inefficient and slow, and it also negatively affects other users in the shared MySQL environment. To ensure a smooth experience for everyone, we'll be pushing all large transformations to a faster backend (Redshift, and possibly others in the future).

This is in addition to the query time limit, which targets (accidentally) unoptimized queries.

There will be two limits. A lower soft limit will warn you that you're exceeding it but won't stop the transformation. A higher hard limit will stop the transformation immediately. The soft limit is just a warning that you're processing larger amounts of data; you only need to take action if you're getting close to the hard limit.

What to do if you're exceeding the limit? There are a few easy ways to avoid breaking these limits:

  • Incremental processing. Set up your pipeline as incremental and do not process all data every run. The limit measures only transferred data, not the whole table size. 
  • Move the transformation to Redshift, along with the relevant Storage buckets. There are no such limits on Redshift, and it's just way faster.

The soft limit is already in place and its size is 2GB (2147483648 bytes). You can find the warnings in your Event list by searching for "We recommend using Redshift for tables larger than 2147483648 bytes.".

The hard limit will be introduced on July 1st and will be 5GB (5368709120 bytes). On June 1st we will notify all affected users before the policy comes into effect and will try to help find a feasible solution.
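
As a rough self-check before the hard limit arrives, you can compare the amount of data a transformation transfers against these thresholds. A minimal sketch in Python, assuming you have the exported table data in a local file (the path is just an example, not anything the platform provides):

  import os

  SOFT_LIMIT = 2 * 1024 ** 3   # 2GB = 2147483648 bytes, warning only
  HARD_LIMIT = 5 * 1024 ** 3   # 5GB = 5368709120 bytes, stops the transformation

  size = os.path.getsize("exported_table.csv")   # example path

  if size > HARD_LIMIT:
      print("Over the hard limit - the transformation would be stopped.")
  elif size > SOFT_LIMIT:
      print("Over the soft limit - consider incremental processing or Redshift.")
  else:
      print("Within limits (%d bytes)." % size)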

Docker bundle enhancements

We're excited to announce new features in Docker bundle. 

For those who don't know, Docker bundle is a component that allows anyone to run apps encapsulated in Docker images in Keboola Connection.

Streaming Logs

If your app writes to stdout or stderr, these logs are immediately forwarded to Storage API Events, so you can report important events from your app live.
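
For example, anything your app prints is picked up this way. A minimal sketch in Python - the messages are illustrative, and any language that writes to stdout/stderr behaves the same:

  import sys

  # Anything written to stdout or stderr is forwarded to Storage API Events.
  print("processing started, 3 input files found")                 # progress message
  print("row 1042 skipped: missing timestamp", file=sys.stderr)    # problem report
  sys.stdout.flush()   # flush so messages aren't delayed by output buffering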

More about streamed logs in the documentation.

Incremental File Processing

In a scenario where you're processing an unknown number of files on a regular basis, incremental file processing comes in handy. Successfully processed files get tagged and are excluded on the next run. 
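
Your app needs no special logic for this: it simply processes whatever files land in its input folder, and Docker bundle tags the successfully processed ones so they don't appear on the next run. A minimal sketch in Python, assuming a /data/in/files input folder (that folder name is our assumption, not stated in this post):

  import os

  # Sketch: handle every file passed to the container on this run. Files that
  # were processed successfully before were tagged by Docker bundle and are
  # not downloaded again.
  input_dir = "/data/in/files"   # assumed input folder layout

  for name in sorted(os.listdir(input_dir)):
      if name.endswith(".manifest"):
          continue                               # skip metadata files
      path = os.path.join(input_dir, name)
      print("processing %s (%d bytes)" % (path, os.path.getsize(path)))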

More about incremental file processing in the documentation.

Development and troubleshooting API calls

We added sandbox, input and dry-run API calls to Docker bundle. They are similar to their counterparts in the Transformation API and allow you to:

  • prepare the data and a serialized configuration file for your application before you start developing the app, so you don't have to build the folder structure manually (sandbox)
  • see exactly what data comes into your application (input)
  • see the data input and output of your app (dry-run)

The data is compressed in a ZIP file and stored in File Uploads in the given project.
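
Once you download that ZIP from File Uploads, you can unpack it locally and develop or debug your app against the prepared folder structure. A minimal sketch in Python (the archive name is just an example):

  import zipfile

  # Unpack the archive downloaded from File Uploads to inspect the prepared
  # folder structure and serialized configuration locally.
  with zipfile.ZipFile("sandbox.zip") as archive:
      archive.extractall("data")
      print("\n".join(archive.namelist()))   # list everything the call prepared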

More about these API calls in the documentation.

Want to know more, or interested in developing your own apps in KBC? Read more in the documentation or get in touch at support@keboola.com.