# Python with Apache Spark using Jupyter notebook

Now let’s run the Python version of the pi program. Start Anaconda Navigator and select the `spark` virtual environment:

![](https://2100080250-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M1PNTHVApkPePuMdTu3%2F-M1fhs0MTNdgwk0Nvtuh%2F-M1fiKSo5cmLjTh1wX99%2Fscala-17.jpg?alt=media\&token=21b9cbcd-44d2-4ea1-b6af-087954075925)

Click Jupyter Notebook to launch the notebook server.
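
If you prefer a terminal over Anaconda Navigator, you can activate the environment and start the notebook server directly; this assumes the environment was created under the name `spark`:

```
conda activate spark
jupyter notebook
```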

In the Jupyter notebook, you first need to import `findspark` and run `findspark.init()`, which locates the Spark installation that `SPARK_HOME` points to.
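
For example, a first notebook cell might look like the sketch below. If `SPARK_HOME` is not set in your environment, `findspark.init()` also accepts the installation path directly; the path shown in the comment is only an illustration:

```
import findspark
# Reads SPARK_HOME to locate the Spark installation; alternatively,
# pass the path explicitly, e.g. findspark.init("/opt/spark")
findspark.init()
```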

The full script, `pi.py`, is shown below. It estimates pi by Monte Carlo sampling: points are drawn uniformly from a square, and the fraction that lands inside the inscribed circle approaches pi/4. You can run it cell by cell in the notebook, or save it and run it from the command line:

```
python pi.py
```

```
#!/usr/bin/env python
# coding: utf-8
from __future__ import print_function
import findspark
findspark.init()  # must run before importing pyspark
from random import random
from operator import add
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PythonPi").getOrCreate()

partitions = 1
n = 100000 * partitions

def f(_):
    # Sample a point uniformly from the square [-1, 1] x [-1, 1] and
    # return 1 if it falls inside the unit circle, else 0.
    x = random() * 2 - 1
    y = random() * 2 - 1
    return 1 if x ** 2 + y ** 2 <= 1 else 0

# Distribute the n samples across the partitions and sum the hits.
count = spark.sparkContext.parallelize(range(1, n + 1), partitions).map(f).reduce(add)
print("Pi is roughly %f" % (4.0 * count / n))
spark.stop()
```
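
With `partitions = 1`, all of the sampling runs in a single Spark task. If you want Spark to spread the work across several tasks, you can take the partition count from the command line, as the stock Spark pi example does; a minimal sketch, replacing the two assignment lines in the script above:

```
import sys

# Use the first command-line argument as the partition count,
# defaulting to 2, e.g.: python pi.py 4
partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
n = 100000 * partitions
```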

