Commit 5cfa609

Merge pull request #1192 from romainx/feat_spark_install
Improve spark installation
2 parents: 18d2304 + 1dca391. Commit: 5cfa609

File tree

3 files changed (+17 / -39 lines)


docs/using/specifics.md

Lines changed: 7 additions & 31 deletions
@@ -16,8 +16,6 @@ You can build a `pyspark-notebook` image (and also the downstream `all-spark-not
 * `spark_version`: The Spark version to install (`3.0.0`).
 * `hadoop_version`: The Hadoop version (`3.2`).
 * `spark_checksum`: The package checksum (`BFE4540...`).
-* Spark is shipped with a version of Py4J that has to be referenced in the `PYTHONPATH`.
-  * `py4j_version`: The Py4J version (`0.10.9`), see the tip below.
 * Spark can run with different OpenJDK versions.
   * `openjdk_version`: The version of (JRE headless) the OpenJDK distribution (`11`), see [Ubuntu packages](https://packages.ubuntu.com/search?keywords=openjdk).

@@ -27,47 +25,25 @@ For example here is how to build a `pyspark-notebook` image with Spark `2.4.6`,
 # From the root of the project
 # Build the image with different arguments
 docker build --rm --force-rm \
-    -t jupyter/pyspark-notebook:spark-2.4.6 ./pyspark-notebook \
-    --build-arg spark_version=2.4.6 \
+    -t jupyter/pyspark-notebook:spark-2.4.7 ./pyspark-notebook \
+    --build-arg spark_version=2.4.7 \
     --build-arg hadoop_version=2.7 \
-    --build-arg spark_checksum=3A9F401EDA9B5749CDAFD246B1D14219229C26387017791C345A23A65782FB8B25A302BF4AC1ED7C16A1FE83108E94E55DAD9639A51C751D81C8C0534A4A9641 \
-    --build-arg openjdk_version=8 \
-    --build-arg py4j_version=0.10.7
+    --build-arg spark_checksum=0F5455672045F6110B030CE343C049855B7BA86C0ECB5E39A075FF9D093C7F648DA55DED12E72FFE65D84C32DCD5418A6D764F2D6295A3F894A4286CC80EF478 \
+    --build-arg openjdk_version=8

 # Check the newly built image
-docker images jupyter/pyspark-notebook:spark-2.4.6
-
-# REPOSITORY                  TAG           IMAGE ID       CREATED         SIZE
-# jupyter/pyspark-notebook    spark-2.4.6   7ad7b5a9dbcd   4 minutes ago   3.44GB
-
-# Check the Spark version
-docker run -it --rm jupyter/pyspark-notebook:spark-2.4.6 pyspark --version
+docker run -it --rm jupyter/pyspark-notebook:spark-2.4.7 pyspark --version

 # Welcome to
 #       ____              __
 #      / __/__  ___ _____/ /__
 #     _\ \/ _ \/ _ `/ __/ '_/
-#    /___/ .__/\_,_/_/ /_/\_\   version 2.4.6
+#    /___/ .__/\_,_/_/ /_/\_\   version 2.4.7
 #       /_/
 #
-# Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_265
-```
-
-**Tip**: to get the version of Py4J shipped with Spark:
-
-* Build a first image without changing `py4j_version` (it will not prevent the image to build it will just prevent Python to find the `pyspark` module),
-* get the version (`ls /usr/local/spark/python/lib/`),
-* set the version `--build-arg py4j_version=0.10.7`.
-
-```bash
-docker run -it --rm jupyter/pyspark-notebook:spark-2.4.6 ls /usr/local/spark/python/lib/
-# py4j-0.10.7-src.zip PY4J_LICENSE.txt pyspark.zip
-# You can now set the build-arg
-# --build-arg py4j_version=
+# Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_275
 ```

-*Note: At the time of writing there is an issue preventing to use Spark `2.4.6` with Python `3.8`, see [this answer on SO](https://stackoverflow.com/a/62173969/4413446) for more information.*
-
 ### Usage Examples

 The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images support the use of [Apache Spark](https://spark.apache.org/) in Python, R, and Scala notebooks. The following sections provide some examples of how to get started using them.
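As an aside (not part of this commit): the build still requires a `spark_checksum` matching the chosen Spark/Hadoop combination. One way to obtain it, assuming the usual Apache archive layout where a `.sha512` file is published next to each release tarball, is sketched below.

```bash
# Sketch only, not part of this commit: fetch the published SHA-512 for a
# given Spark/Hadoop combination (assumes the archive.apache.org layout).
SPARK_VERSION=2.4.7
HADOOP_VERSION=2.7
wget -qO- "https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz.sha512"
# The printed digest (with any whitespace removed) is the value to pass as
# --build-arg spark_checksum=...
```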

pyspark-notebook/Dockerfile

Lines changed: 6 additions & 4 deletions
@@ -16,7 +16,6 @@ USER root
 ARG spark_version="3.0.1"
 ARG hadoop_version="3.2"
 ARG spark_checksum="E8B47C5B658E0FBC1E57EEA06262649D8418AE2B2765E44DA53AAF50094877D17297CC5F0B9B35DF2CEEF830F19AA31D7E56EAD950BBE7F8830D6874F88CFC3C"
-ARG py4j_version="0.10.9"
 ARG openjdk_version="11"

 ENV APACHE_SPARK_VERSION="${spark_version}" \
@@ -39,14 +38,17 @@ RUN wget -q $(wget -qO- https://www.apache.org/dyn/closer.lua/spark/spark-${APAC
     rm "spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz"

 WORKDIR /usr/local
-RUN ln -s "spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}" spark

 # Configure Spark
 ENV SPARK_HOME=/usr/local/spark
-ENV PYTHONPATH="${SPARK_HOME}/python:${SPARK_HOME}/python/lib/py4j-${py4j_version}-src.zip" \
-    SPARK_OPTS="--driver-java-options=-Xms1024M --driver-java-options=-Xmx4096M --driver-java-options=-Dlog4j.logLevel=info" \
+ENV SPARK_OPTS="--driver-java-options=-Xms1024M --driver-java-options=-Xmx4096M --driver-java-options=-Dlog4j.logLevel=info" \
     PATH=$PATH:$SPARK_HOME/bin

+RUN ln -s "spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}" spark && \
+    # Add a link in the before_notebook hook in order to source automatically PYTHONPATH
+    mkdir -p /usr/local/bin/before-notebook.d && \
+    ln -s "${SPARK_HOME}/sbin/spark-config.sh" /usr/local/bin/before-notebook.d/spark-config.sh
+
 USER $NB_UID

 # Install pyarrow
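The net effect of this Dockerfile change: `py4j_version` no longer has to be pinned at build time. Instead, Spark's own `sbin/spark-config.sh` is linked into the `before-notebook.d` hook directory, so it is sourced at container start-up and `PYTHONPATH` is derived from whatever Py4J archive ships in the downloaded distribution. A rough sketch of what sourcing such a script amounts to (illustrative only; the real script bundled with Spark is the authoritative source and may differ in detail):

```bash
# Illustrative sketch only; see ${SPARK_HOME}/sbin/spark-config.sh in the
# Spark distribution for the actual logic.
export SPARK_HOME="${SPARK_HOME:-/usr/local/spark}"
# Locate the Py4J archive bundled with this Spark release, whatever its version.
PY4J_ZIP="$(ls "${SPARK_HOME}"/python/lib/py4j-*-src.zip 2>/dev/null | head -n 1)"
export PYTHONPATH="${SPARK_HOME}/python:${PY4J_ZIP}:${PYTHONPATH}"
```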

pyspark-notebook/test/test_spark.py

Lines changed: 4 additions & 4 deletions
@@ -12,19 +12,19 @@ def test_spark_shell(container):
         tty=True,
         command=['start.sh', 'bash', '-c', 'spark-shell <<< "1+1"']
     )
-    c.wait(timeout=30)
+    c.wait(timeout=60)
     logs = c.logs(stdout=True).decode('utf-8')
     LOGGER.debug(logs)
-    assert 'res0: Int = 2' in logs
+    assert 'res0: Int = 2' in logs, "spark-shell does not work"


 def test_pyspark(container):
     """PySpark should be in the Python path"""
     c = container.run(
         tty=True,
-        command=['start.sh', 'python', '-c', '"import pyspark"']
+        command=['start.sh', 'python', '-c', 'import pyspark']
     )
     rv = c.wait(timeout=30)
-    assert rv == 0 or rv["StatusCode"] == 0
+    assert rv == 0 or rv["StatusCode"] == 0, "pyspark not in PYTHONPATH"
     logs = c.logs(stdout=True).decode('utf-8')
     LOGGER.debug(logs)
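For completeness, one plausible way to exercise these tests locally, assuming the repository's pytest-based harness and per-image make targets (neither is shown in this diff):

```bash
# Assumed workflow, not confirmed by this diff: build the pyspark image and
# run its test suite via the repository's make targets.
make build/pyspark-notebook
make test/pyspark-notebook
```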
