How do I write a PySpark DataFrame to Redshift?
I am trying to write a PySpark DataFrame to Redshift, but it fails with the following error:
java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.avro.AvroFileFormat could not be instantiated
Caused by: java.lang.NoSuchMethodError: org.apache.spark.sql.execution.datasources.FileFormat.$init$(Lorg/apache/spark/sql/execution/datasources/FileFormat;)V
Spark version: 2.4.1
Spark-submit command: spark-submit --master local[*] --jars ~/Downloads/spark-avro_2.12-2.4.0.jar,~/Downloads/aws-java-sdk-1.7.4.jar,~/Downloads/RedshiftJDBC42-no-awssdk-1.2.20.1043.jar,~/Downloads/hadoop-aws-2.7.3.jar,~/Downloads/hadoop-common-2.7.3.jar --packages com.databricks:spark-redshift_2.11:2.0.1,com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3,org.apache.hadoop:hadoop-common:2.7.3,org.apache.spark:spark-avro_2.12:2.4.0 script.py
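I suspect a Scala version mismatch: a NoSuchMethodError on FileFormat.$init$ is the kind of failure you get when Scala 2.11 and 2.12 artifacts are mixed, and the command above combines spark-avro_2.12 with spark-redshift_2.11 (and, I assume, a Scala 2.11 build of Spark 2.4.1). For reference, this is how I would line everything up on Scala 2.11 (a sketch, untested; it assumes org.apache.spark:spark-avro_2.11:2.4.1 is the matching artifact and drops the local spark-avro_2.12 jar):

spark-submit --master local[*] --jars ~/Downloads/aws-java-sdk-1.7.4.jar,~/Downloads/RedshiftJDBC42-no-awssdk-1.2.20.1043.jar,~/Downloads/hadoop-aws-2.7.3.jar,~/Downloads/hadoop-common-2.7.3.jar --packages com.databricks:spark-redshift_2.11:2.0.1,com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3,org.apache.hadoop:hadoop-common:2.7.3,org.apache.spark:spark-avro_2.11:2.4.1 script.py

Is that the right way to resolve the conflict?

Here is script.py: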
import os

from pyspark.context import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.session import SparkSession

# Redshift connection settings are read from the environment
pe_dl_dbname = os.environ.get("REDSHIFT_DL_DBNAME")
pe_dl_host = os.environ.get("REDSHIFT_DL_HOST")
pe_dl_port = os.environ.get("REDSHIFT_DL_PORT")
pe_dl_user = os.environ.get("REDSHIFT_DL_USER")
pe_dl_password = os.environ.get("REDSHIFT_DL_PASSWORD")
s3_bucket_path = "s3-bucket-name/sub-folder/sub-sub-folder"
tempdir = "s3a://{}".format(s3_bucket_path)  # S3 staging area used by the Redshift connector
driver = "com.databricks.spark.redshift"
sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)
spark = SparkSession(sc)
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
sc._jsc.hadoopConfiguration().set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")  # note: this sets the implementation for s3:// URIs; the tempdir above uses s3a://
datalake_jdbc_url = 'jdbc:redshift://{}:{}/{}?user={}&password={}'.format(pe_dl_host, pe_dl_port, pe_dl_dbname, pe_dl_user, pe_dl_password)
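# the resulting URL has the form:
# jdbc:redshift://<host>:<port>/<database>?user=<user>&password=<password>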
"""
The table is created in Redshift as follows:
create table adhoc_analytics.testing (name varchar(255), age integer);
"""
# a one-row DataFrame that reproduces the failure
l = [('Alice', 1)]
df = spark.createDataFrame(l, ['name', 'age'])
df.show()
df.write \
    .format(driver) \
    .option("url", datalake_jdbc_url) \
    .option("dbtable", "adhoc_analytics.testing") \
    .option("tempdir", tempdir) \
    .option("tempformat", "CSV") \
    .save()
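Separately, in case it matters once the class-loading error is fixed: as I understand it, the connector also needs S3 credentials for the tempdir, plus a way for Redshift to read the staged files. Below is a sketch of how I expect that part to look (forward_spark_s3_credentials is an option documented in the spark-redshift README; the AWS_* environment variable names are placeholders for wherever the keys actually live):

# hypothetical sketch: give the s3a filesystem the AWS keys, then have the
# connector forward the same credentials to Redshift for the COPY
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", os.environ["AWS_ACCESS_KEY_ID"])      # placeholder env var
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", os.environ["AWS_SECRET_ACCESS_KEY"])  # placeholder env var

df.write \
    .format(driver) \
    .option("url", datalake_jdbc_url) \
    .option("dbtable", "adhoc_analytics.testing") \
    .option("tempdir", tempdir) \
    .option("tempformat", "CSV") \
    .option("forward_spark_s3_credentials", "true") \
    .save()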