To update a Databricks cluster to a later Okera release, the new Okera jars must be copied to Databricks. Use the following steps.
1. Log into a system that has access to the Okera repository and to the Databricks cluster.
2. Download the jars from the Okera release repository to the local system, replacing <release> with the target Okera release version.
aws s3 cp s3://okera-release-useast/<release>/client/recordservice-hive.jar /tmp/
aws s3 cp s3://okera-release-useast/<release>/client/cerebro-hive-metastore.jar /tmp/
aws s3 cp s3://okera-release-useast/<release>/client/recordservice-spark-2.0.jar /tmp/
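Before proceeding, it is worth confirming that all three downloads completed. This is a minimal sketch, assuming the jars were copied to /tmp as in step 2; the `check_jars` helper is illustrative, not part of any Okera tooling.

```shell
# Sketch: verify the downloaded jars before proceeding. Assumes the jars
# from step 2 were copied to /tmp; pass another directory if you used one.
check_jars() {
  dir="$1"
  for jar in recordservice-hive.jar cerebro-hive-metastore.jar recordservice-spark-2.0.jar; do
    # -s: file exists and is non-empty (a failed download can leave a zero-byte file)
    [ -s "$dir/$jar" ] || { echo "missing or empty: $jar" >&2; return 1; }
  done
  echo "all jars present in $dir"
}

# check_jars /tmp
```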
3. Connect to the Databricks cluster (for example, from a notebook or shell session with access to DBFS).
4. Copy the new jars to a staging directory in DBFS.
cp /tmp/recordservice-hive.jar /dbfs/databricks/okera_staging/
cp /tmp/cerebro-hive-metastore.jar /dbfs/databricks/okera_staging/
cp /tmp/recordservice-spark-2.0.jar /dbfs/databricks/okera_staging/
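The staging copy in step 4 can also be sketched as a small script that fails fast if any copy does not succeed. The paths match the commands above; the `stage_jars` helper is illustrative.

```shell
# Sketch of step 4: copy each Okera jar into the DBFS staging directory,
# stopping immediately if any copy fails.
stage_jars() {
  src="$1"    # e.g. /tmp
  dest="$2"   # e.g. /dbfs/databricks/okera_staging
  mkdir -p "$dest"
  for jar in recordservice-hive.jar cerebro-hive-metastore.jar recordservice-spark-2.0.jar; do
    cp "$src/$jar" "$dest/" || { echo "copy failed: $jar" >&2; return 1; }
  done
}

# stage_jars /tmp /dbfs/databricks/okera_staging
```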
5. Test thoroughly: run representative queries from the cluster against Okera-managed data to confirm the new jars load and behave as expected.
6. Once testing passes, copy the jars from the staging directory to the production location.
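One way to make the final step safer is to verify each jar byte-for-byte after copying it out of staging. This is a sketch only: /dbfs/databricks/okera below is a hypothetical production path, so substitute the location your cluster actually loads jars from.

```shell
# Sketch: promote jars from staging to production and confirm each copy
# matches the staged file. The production path shown is hypothetical.
promote_jars() {
  staging="$1"   # e.g. /dbfs/databricks/okera_staging
  prod="$2"      # hypothetical, e.g. /dbfs/databricks/okera
  mkdir -p "$prod"
  for jar in recordservice-hive.jar cerebro-hive-metastore.jar recordservice-spark-2.0.jar; do
    cp "$staging/$jar" "$prod/"
    # cmp -s exits non-zero if the two files differ
    cmp -s "$staging/$jar" "$prod/$jar" || { echo "verify failed: $jar" >&2; return 1; }
  done
}

# promote_jars /dbfs/databricks/okera_staging /dbfs/databricks/okera
```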
For more information regarding Okera/Databricks integration, refer to the Okera integration documentation.