Python 3.7 Xenial build started to fail with “failed to map segment from shared object”

The Python peps repo recently started failing to build with cryptic shared library errors.

I have a PR that built successfully 3 days ago, but then failed post-merge a day or so ago:

(This isn’t a Python build - it’s using Python to run docutils and other scripts to render the PEPs as HTML)

Both PR builds since then have failed in the same way: https://github.com/python/peps/pulls?q=is%3Aopen+is%3Apr

Did the binascii C extension not get compiled?
cc @BanzaiMan

Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/docutils/writers/__init__.py", line 140, in get_writer_class
    module = __import__(writer_name, globals(), locals(), level=1)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/docutils/writers/pep_html/__init__.py", line 18, in <module>
    from docutils.writers import html4css1
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/docutils/writers/html4css1/__init__.py", line 21, in <module>
    from docutils.writers import _html_base
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/docutils/writers/_html_base.py", line 23, in <module>
    import urllib.request, urllib.parse, urllib.error
  File "/opt/python/3.7.1/lib/python3.7/urllib/request.py", line 84, in <module>
    import base64
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/base64.py", line 11, in <module>
    import binascii
ImportError: /home/travis/virtualenv/python3.7.1/lib/python3.7/lib-dynload/binascii.cpython-37m-x86_64-linux-gnu.so: failed to map segment from shared object

I see that docutils was changed in between.

Can you pin docutils to version 0.14 and let us know if it helps?
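For reference, such a pin would look like this in a pip requirements file (the exact file name used by the repo is an assumption here):

```
# requirements.txt (hypothetical) — hold docutils at the last known-good release
docutils==0.14
```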

Thanks!

But the exception is clearly raised while importing binascii, not inside docutils code.

Looking around the Net, “failed to map segment from shared object” is usually accompanied by an additional message specifying the reason. Since in this case it isn’t, this looks like an out-of-memory condition. Use make -j$(nproc) instead of make -j.
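To illustrate the difference with a toy Makefile (a minimal sketch, not the repo’s actual build file): make -j with no number places no limit on concurrent jobs, so every ready target gets its own process at once, while -j$(nproc) caps concurrency at the available core count.

```shell
# Build a throwaway Makefile with four independent targets,
# mirroring the per-PEP HTML renders (each target needs no other).
demo=$(mktemp -d)
cat > "$demo/Makefile" <<'EOF'
.RECIPEPREFIX := >
targets := a b c d
all: $(targets)
$(targets):
>@echo building $@
EOF

# Unbounded parallelism: one process per ready target, all at once.
make -C "$demo" -j

# Bounded parallelism: at most $(nproc) jobs run concurrently,
# keeping peak memory use proportional to the core count.
make -C "$demo" -j"$(nproc)"
```

With four targets the two commands behave identically, but with hundreds of targets the first spawns hundreds of processes while the second never exceeds the core count.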


Thanks for the PR @native-api!

For anyone curious as to exactly what was changed, https://github.com/python/peps/pull/1131/files is the PR that fixed the problem.

The full PEP repo build is an embarrassingly parallel problem (there are hundreds of PEPs, with no build-time dependencies between them), so we were inadvertently spinning up hundreds of instances of docutils at once.

(I also suspect it was because I added two PEPs right after each other that both PRs built successfully: individually, there was just enough RAM to do everything in parallel, but together they tipped things over the limit.)

@ncoghlan FYI: https://github.com/python/peps/pull/1131/files#r307711301

@webknjaz Good info, thanks.

For folks following the thread: make -j2 is enough to use all the cores allocated to current Travis virtual environments, while make -j$(nproc) will start as many processes as there are cores on the underlying physical machine (currently 32).

That’s not a problem in our particular case (32 docutils instances will barely make a dent in the available RAM; it took hundreds to exhaust it), but it could be an issue for more memory-hungry build scenarios.

UPD: @native-api and I did some testing and now Travis CI correctly reports 2 cores via nproc.
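For anyone wanting to check what their own environment reports (a generic sketch, not specific to Travis): GNU nproc counts the processing units available to the calling process, honoring its CPU affinity mask, while nproc --all counts every installed unit, so the two can differ inside a restricted VM or container.

```shell
# Processing units available to this process (respects the CPU
# affinity mask, e.g. set by taskset or a cpuset cgroup):
avail=$(nproc)

# Every installed processing unit on the machine, regardless of
# what this process is allowed to use:
total=$(nproc --all)

echo "available: $avail of $total"

# A build invocation bounded by what is actually available:
# make -j"$avail"
```

On a machine with no affinity restrictions the two numbers match; in an environment like the one discussed above, a mismatch explains why -j$(nproc) could over-subscribe.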
