Apache Spark
(New in version 0.13.0)
The Spark Integration adds support for the Python API for Apache Spark, PySpark.
This integration is experimental and in an alpha state. The integration API may experience breaking changes in future minor versions.
Driver
The Spark driver integration is supported for Spark 2 and above.
To configure the SDK, initialize it with the integration before you create a SparkContext or SparkSession.
import sentry_sdk
from sentry_sdk.integrations.spark import SparkIntegration
from pyspark.sql import SparkSession

if __name__ == "__main__":
    sentry_sdk.init(
        "https://examplePublicKey@o0.ingest.sentry.io/0",
        integrations=[SparkIntegration()],
    )

    spark = SparkSession \
        .builder \
        .appName("ExampleApp") \
        .getOrCreate()
    ...
Worker
The Spark worker integration is supported for Spark 2.4.x. It is not supported for Spark 3.
Create a file called sentry_daemon.py with the following content:
sentry_daemon.py
import sentry_sdk
from sentry_sdk.integrations.spark import SparkWorkerIntegration
import pyspark.daemon as original_daemon

if __name__ == "__main__":
    sentry_sdk.init(
        "https://examplePublicKey@o0.ingest.sentry.io/0",
        integrations=[SparkWorkerIntegration()],
    )
    original_daemon.manager()
In your spark-submit command, add the following configuration options so that your Spark clusters can use the Sentry integration.
| Command Line Options | Parameter | Usage |
| --- | --- | --- |
| --py-files | sentry_daemon.py | Sends the sentry_daemon.py file to your Spark clusters |
| --conf | spark.python.use.daemon=true | Configures Spark to use a daemon to execute its Python workers |
| --conf | spark.python.daemon.module=sentry_daemon | Configures Spark to use the Sentry custom daemon |
./bin/spark-submit \
--py-files sentry_daemon.py \
--conf spark.python.use.daemon=true \
--conf spark.python.daemon.module=sentry_daemon \
example-spark-job.py
Behavior
- You must have the Sentry Python SDK installed on all your clusters to use the Spark integration. The easiest way to do this is to run an initialization script on all your clusters:
easy_install pip
pip install --upgrade sentry-sdk
- In order to access certain tags (app_name, application_id), the worker integration requires the driver integration to also be active (see the sketch after this list).
- The worker integration only works on UNIX-based systems due to the daemon process using signals for child management.
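If you want to confirm that the driver integration is attaching these tags, one option is a before_send hook that prints the tags on outgoing events. This is a minimal sketch, not part of the integration itself: log_spark_tags is a hypothetical helper, and the DSN and app name reuse the placeholders from the Driver example above.

import sentry_sdk
from sentry_sdk.integrations.spark import SparkIntegration
from pyspark.sql import SparkSession

def log_spark_tags(event, hint):
    # The driver integration sets these tags once the SparkContext exists;
    # they are merged into the event before before_send is called.
    tags = event.get("tags", {})
    print("app_name:", tags.get("app_name"))
    print("application_id:", tags.get("application_id"))
    return event

if __name__ == "__main__":
    sentry_sdk.init(
        "https://examplePublicKey@o0.ingest.sentry.io/0",
        integrations=[SparkIntegration()],
        before_send=log_spark_tags,
    )

    spark = SparkSession.builder.appName("ExampleApp").getOrCreate()
    sentry_sdk.capture_message("Spark tag check")  # triggers before_send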
Google Cloud Dataproc
This integration can be set up to be used with Google Cloud Dataproc. Cloud Dataproc image version 1.4 is recommended because it comes with Spark 2.4, which the worker integration requires.
1. Set up an initialization action to install the sentry-sdk on your Dataproc cluster.
2. Add the driver integration to the main Python file submitted in the job submit screen.
3. Add sentry_daemon.py under Additional python files in the job submit screen. You must first upload the daemon file to a bucket to access it.
4. Add the configuration properties listed above, spark.python.use.daemon=true and spark.python.daemon.module=sentry_daemon, in the job submit screen. An equivalent set of gcloud commands is sketched after this list.
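For reference, the same setup can also be driven from the gcloud CLI instead of the job submit screen. The following is a sketch only: the cluster name, region, bucket paths, and the install-sentry-sdk.sh script name are placeholders, and the script is assumed to contain the pip install command from the Behavior section above.

# Create the cluster with an initialization action that installs sentry-sdk.
gcloud dataproc clusters create your-cluster \
    --region=your-region \
    --image-version=1.4 \
    --initialization-actions=gs://your-bucket/install-sentry-sdk.sh

# Submit the PySpark job with the daemon options from the table above.
gcloud dataproc jobs submit pyspark gs://your-bucket/example-spark-job.py \
    --cluster=your-cluster \
    --region=your-region \
    --py-files=gs://your-bucket/sentry_daemon.py \
    --properties=spark.python.use.daemon=true,spark.python.daemon.module=sentry_daemon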
- Package: pypi:sentry-sdk
- Version: 2.12.0
- Repository: https://github.com/getsentry/sentry-python