Action Required: Facebook Insights Extractor Token

Facebook may have silently dropped the read_insights and manage_pages permissions from the token stored in the Facebook Insights Extractor (ex-facebook). If that happened, your project may be missing insights data. Here are two simple steps to verify and fix the issue.

Verify your token's permissions

Paste the token(s) stored in your configuration into the debugger tool at https://developers.facebook.com/tools/debug/ and check the permissions in the results. If both read_insights and manage_pages appear in the Scopes section, your token is valid.
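
If you'd rather check from a script than the web UI, here is a minimal R sketch using the Graph API's debug_token endpoint (the placeholder tokens below are assumptions, and you'll need an app access token to call it):

library(httr)

# Ask the Graph API which scopes the extractor token still carries.
# input_token is the token from your configuration; access_token is an
# app access token (placeholder "APP_ID|APP_SECRET").
res <- GET(
  "https://graph.facebook.com/debug_token",
  query = list(
    input_token  = "EXTRACTOR_TOKEN_TO_CHECK",
    access_token = "APP_ID|APP_SECRET"
  )
)

scopes <- unlist(content(res, as = "parsed")$data$scopes)
all(c("read_insights", "manage_pages") %in% scopes)  # TRUE means the token is fine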

If not, please create a new token.

Create a new token

Go to https://syrup.keboola.com/ex-facebook/token and authorize our extractor to access your account. You'll get a new token that you can paste back into your configuration.

Backfill

If you have any gaps in your insights data, you can increase the overlap periods to backfill the missing data automatically, or specify the period manually. See the documentation for more details. If you're stuck, get in touch with us here in the comments section or at support@keboola.com.

Elasticsearch Failure - Components not working

Our Elasticsearch Syrup cluster is not responding. This cluster stores all information for all components' jobs. Storage works fine, but the rest of the system came to a halt. We're investigating this issue.

UPDATE 10:30pm PST / 7:30am CEST: The cluster is back online and all operations have resumed or been restarted. We're sorry for any inconvenience.

Storage API Client For R

Want to play with your KBC data in your local R environment?

Install the keboola-sapi-r-client and you can. (The package is hosted on GitHub, so it is installed via the devtools package.)

install.packages("devtools")
library(devtools)

First, we need to install a GitHub dependency for AWS request signature generation:

devtools::install_github("cloudyr/aws.signature")

Now we can install the Storage API client and load it into our R session:

devtools::install_github("keboola/sapi-r-client")
library(keboola.sapi.r.client)

Just like any other R package, once installed, it can be invoked in any future session with the library() command.

To instantiate the client, just give it a KBC token.
We'll use the token for the currency exchange rates for demonstration purposes.

client <- SapiClient$new('452-33945-de5bb7fecb818901f0834b2431564003296a4b05')

Now we can import data into our R session:

currencyData <- client$importTable('in.c-ex-currency.rates')

Just for fun, let's make a simple plot of EUR vs USD using the ggplot2 library (if it isn't installed, run install.packages("ggplot2")).

# prepare our data
eurVsUsd <- currencyData[which(currencyData$toCurrency == "USD"),]
eurVsUsd$date <- as.Date(eurVsUsd$date)

# load the libraries needed to make our plot
library(ggplot2)
library(scales) # for prettier x-axis labeling

p <- ggplot(eurVsUsd, aes_string(x="date", y="rate")) + geom_point()
# add x-axis scaling and title
p <- p + scale_x_date(breaks="1 year", labels=date_format("%Y"))
p <- p + ggtitle("EUR vs USD")
print(p)

The code for this sample is available in this gist.

The Storage API client gives full read and write access to your KBC project within the comforting power of your local R environment.
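
For example, you could push a derived table back into Storage straight from R. Here's a hedged sketch; the saveTable call, its arguments, and the output bucket below are illustrative assumptions, so check the package documentation for the exact write interface:

# Sketch only: write the filtered EUR/USD data back to Storage.
# The method name and arguments are assumptions; consult the
# keboola/sapi-r-client docs before relying on them.
client$saveTable(
  eurVsUsd,
  bucket = "out.c-r-demo",     # hypothetical output bucket
  tableName = "eur_vs_usd"     # hypothetical table name
)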

Imagine the possibilities!

* small print *  This is a development tool in Beta; use at your own risk!

MySQL Transformation input mapping size limit

We'll be introducing a limit on the size of tables imported into a MySQL transformation.

Why? Processing large tables in MySQL is very inefficient and slow, and it also negatively affects other users in the shared MySQL environment. To ensure a smooth experience for everyone, we'll be pushing all large transformations to a faster backend (Redshift, and possibly others in the future).

This is in addition to the query time limit, which targets (accidentally) unoptimized queries.

There will be two limits. A lower soft limit will warn you that you're exceeding it, but won't stop the transformation. A higher hard limit will stop the transformation immediately. The soft limit is just a warning that you're processing larger amounts of data; you only need to take action if you're getting close to the hard limit.

What to do if you're exceeding the limit? There are a few easy ways to stay under these limits:

  • Incremental processing. Set up your pipeline as incremental and do not process all data every run. The limit measures only transferred data, not the whole table size. 
  • Move the transformation to Redshift, along with the relevant storage buckets. There are no such limits on Redshift, and it's just way faster.

The soft limit is already in place and is set to 2 GB (2147483648 bytes). You can find the warnings in your Event list by searching for "We recommend using Redshift for tables larger than 2147483648 bytes.".

The hard limit will be introduced on July 1st and will be set to 5 GB (5368709120 bytes). On June 1st we will notify all affected users before this policy comes into effect and will try to help them find a feasible solution.
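
To put the two thresholds in perspective, here is a tiny R sketch (the input size used is just an assumed example) comparing a transformation's input mapping size against the soft and hard limits:

# Soft and hard limits on MySQL transformation input mapping size (see above)
soft_limit_bytes <- 2147483648   # 2 GB: warning only, the transformation still runs
hard_limit_bytes <- 5368709120   # 5 GB: the transformation is stopped

input_bytes <- 3.5e9             # assumed example: bytes actually transferred into the transformation

if (input_bytes >= hard_limit_bytes) {
  message("Over the hard limit - the transformation would be stopped.")
} else if (input_bytes >= soft_limit_bytes) {
  message("Over the soft limit - expect a warning; consider incremental processing or Redshift.")
} else {
  message("Within both limits.")
}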

Infrastructure issues

We are investigating infrastructure issues affecting most extractors and writers. Thank you for your patience.

UPDATE 0:33am PST / 9:33am CEST: The issue has been resolved and we'll try to restart all failed orchestrations; if we miss anything, please feel free to restart them yourself. Sorry for any inconvenience.

GoodData Writer issues

We have fixed multiple errors in the handling of GoodData's Saturday maintenance, which could cause some of the Writer's jobs to fail. All problems have been resolved and shouldn't appear again. We apologize for any inconvenience.