Run Python code
Learn how to run Python code between nodes using RunPython

In this guide, we'll show you how to run custom code between two LLM calls.

This pipeline is a simple example of "function calling" or "tool use" on Substrate. For advanced situations, we can use the ComputeJSON node to generate a function call, and the If node to conditionally execute branches of a graph.
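As a hedged sketch of that advanced pattern (parameter names here are assumptions based on the pattern described above; check the ComputeJSON and If node references for the exact signatures), a conditional branch might look like:

```python
# Hypothetical sketch: ask the LLM to emit a structured decision, then
# branch on it. Parameter names are assumptions, not confirmed API.
decision = ComputeJSON(
    prompt="Decide whether an image search would help answer the question, "
           "and if so, propose a search query.",
    json_schema={
        "type": "object",
        "properties": {
            "search": {"type": "boolean"},
            "query": {"type": "string"},
        },
    })
branch = If(
    condition=decision.future.json_object["search"],
    value_if_true=decision.future.json_object["query"],
    value_if_false="")
```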

A JavaScript code interpreter is coming soon. We're also designing the right abstractions for code execution in a stateful container. Let us know if you're interested!

Before we begin, we'll define a function that runs an image search on serper.dev. Later, we'll use RunPython to run this locally defined function in a remote Python sandbox.

Python

def image_search(query: str, api_key: str):
    import base64
    import http.client
    import json

    import requests

    # Query the serper.dev image search API.
    conn = http.client.HTTPSConnection("google.serper.dev")
    payload = json.dumps({"q": query})
    headers = {"X-API-KEY": api_key, "Content-Type": "application/json"}
    conn.request("POST", "/images", payload, headers)
    res = conn.getresponse()
    data = res.read()
    res_json = json.loads(data.decode("utf-8"))

    # Return the first thumbnail we can fetch, as a base64 data URI.
    for image in res_json["images"]:
        image_url = image["thumbnailUrl"]
        try:
            response = requests.get(image_url, timeout=10)
            response.raise_for_status()
            content_type = response.headers.get("content-type", "image/jpeg")
            image_base64 = base64.b64encode(response.content).decode("utf-8")
            print(image_url)
            return f"data:{content_type};base64,{image_base64}"
        except requests.RequestException:
            continue
    return None
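For reference, the function returns the image inline as a data URI. Here's a minimal local sketch of that format, using a stand-in byte payload rather than a real image:

```python
import base64

# Encode raw bytes (a stand-in for image content) as a data URI,
# the same shape image_search returns.
payload = b"\x89PNG stand-in bytes"
encoded = base64.b64encode(payload).decode("utf-8")
data_uri = f"data:image/png;base64,{encoded}"

# The URI round-trips: splitting off the prefix and decoding
# recovers the original bytes.
prefix, _, body = data_uri.partition(";base64,")
assert prefix == "data:image/png"
assert base64.b64decode(body) == payload
```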

Now begins our Substrate pipeline. First, we ask an LLM to generate a unique image search query for a simple prompt "bowl of fruit". The LLM comes up with interesting queries, like "vintage macabre still life fruit bowl".

Python

prompt = "bowl of fruit"
query = ComputeText(
    prompt=f"""We're searching for images on the internet. Return a query to
find an unusual, interesting photo of the following topic. Just return
the query to enter in the search box, no preamble or explanation.
TOPIC: {prompt}""")

Next, we call RunPython with the search function we defined above. We provide the query generated by the LLM along with API credentials, and we install the requests package in the container.

Python

search = RunPython(
    function=image_search,
    kwargs={
        "query": query.future.text,
        "api_key": os.environ.get("SERPER_API_KEY"),
    },
    pip_install=["requests"])

Our search function returns a small thumbnail image. In the final two steps of this pipeline, we upscale the image using UpscaleImage, and then "inpaint" it (filling in details following the prompt) using InpaintImage. Finally, we call substrate.run with the terminal node, inpaint.

Python

upscale = UpscaleImage(
    image_uri=search.future.output,
    prompt=sb.format("{query}, high quality detailed",
                     query=query.future.text))
inpaint = InpaintImage(
    image_uri=upscale.future.image_uri,
    prompt=sb.format("{query}, high quality detailed",
                     query=query.future.text),
    store="hosted")
res = substrate.run(inpaint)

[Image: Thumbnail from image search]
[Image: Final result]

The final result looks quite good, but has lost some details from the original thumbnail image. By switching to StableDiffusionXLInpaint and lowering the strength parameter, we can produce a result closer to the original image.
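As a hedged sketch of that change (assuming StableDiffusionXLInpaint accepts image_uri, prompt, strength, and store; check the node reference for the exact parameters), the final step might become:

```python
# Sketch only: parameter names are assumptions. A lower strength value
# keeps the output closer to the input image.
inpaint = StableDiffusionXLInpaint(
    image_uri=upscale.future.image_uri,
    prompt=sb.format("{query}, high quality detailed",
                     query=query.future.text),
    strength=0.5,
    store="hosted")
res = substrate.run(inpaint)
```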

You can try running this example yourself by forking it on Replit.