Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/docutils/writers/__init__.py", line 140, in get_writer_class
    module = __import__(writer_name, globals(), locals(), level=1)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/docutils/writers/pep_html/__init__.py", line 18, in <module>
    from docutils.writers import html4css1
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/docutils/writers/html4css1/__init__.py", line 21, in <module>
    from docutils.writers import _html_base
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/docutils/writers/_html_base.py", line 23, in <module>
    import urllib.request, urllib.parse, urllib.error
  File "/opt/python/3.7.1/lib/python3.7/urllib/request.py", line 84, in <module>
    import base64
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/base64.py", line 11, in <module>
    import binascii
ImportError: /home/travis/virtualenv/python3.7.1/lib/python3.7/lib-dynload/binascii.cpython-37m-x86_64-linux-gnu.so: failed to map segment from shared object
Looking around the Net, “failed to map segment from shared object” is usually accompanied by an additional message specifying the reason. Since in this case it isn’t, and given the details below, this looks like an out-of-memory condition. The fix: use make -j$(nproc) instead of make -j.
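To see the difference concretely, here is a minimal, self-contained illustration using a throwaway Makefile as a stand-in for the real PEP build (the targets here are hypothetical):

```shell
# Throwaway Makefile standing in for the PEP build (targets are made up).
workdir=$(mktemp -d)
printf 'all: a.out b.out c.out\n%%.out:\n\ttouch $@\n' > "$workdir/Makefile"

# `make -j` with no number puts NO limit on concurrent jobs: every ready
# target gets its own process at once. `make -j"$(nproc)"` caps the job
# count at the number of CPUs nproc reports.
make -C "$workdir" -j"$(nproc)" all
ls "$workdir"
```

With hundreds of independent targets, the unbounded form launches hundreds of processes simultaneously; the bounded form never exceeds the CPU count.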
The full PEP repo build is an embarrassingly parallel problem (there are hundreds of PEPs, with no build-time dependencies between them), so we were inadvertently spinning up hundreds of docutils instances at once.
(I also suspect it was the fact that I added two PEPs right after each other that let both PRs build simultaneously: individually, there was just enough RAM to do everything in parallel, but together they tipped things over the limit.)
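The same “cap the workers” idea can be sketched in Python rather than make. This is only an analogy, not how the PEP repo actually builds: render_pep is a hypothetical stand-in for invoking docutils on one PEP source file, and the fork start method is assumed for simplicity (i.e. a Linux build host):

```python
import os
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def render_pep(number):
    # Hypothetical stand-in for running docutils on one PEP source file.
    return f"pep-{number:04d}.html"

def build_all(pep_numbers, max_workers):
    # max_workers is the analogue of `make -j N`: without a cap, spawning
    # one process per PEP would mean hundreds of docutils instances at once.
    ctx = mp.get_context("fork")  # assumption: Linux, fork is available
    with ProcessPoolExecutor(max_workers=max_workers, mp_context=ctx) as pool:
        return list(pool.map(render_pep, pep_numbers))

# Bounded at the CPU count, like `make -j$(nproc)`.
outputs = build_all(range(100), max_workers=os.cpu_count())
print(outputs[:2])
```

The pool queues the hundreds of pending jobs and only ever runs max_workers of them concurrently, so peak memory stays proportional to the worker cap rather than to the number of PEPs.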
For folks following the thread: make -j2 is enough to use all the cores allocated to current Travis virtual environments, while make -j$(nproc) will start as many processes as there are cores on the underlying physical machine (currently 32).
That’s not a problem in our particular case (32 docutils instances will barely make a dent in the available RAM; it took hundreds to exhaust it), but it could be an issue for more memory-hungry build scenarios.
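For those memory-hungry scenarios, one defensive pattern (a sketch, with the ceiling of 2 chosen to match the current Travis VM allocation mentioned above) is to take the smaller of nproc and an explicit cap:

```shell
# nproc reports the host machine's CPU count (32 on the underlying Travis
# hardware), which can far exceed the vCPUs actually allocated to the VM.
echo "nproc says: $(nproc)"

# Use the smaller of nproc and an explicit ceiling, so the job count is
# bounded on shared CI but still scales down on small local machines.
jobs=$(( $(nproc) < 2 ? $(nproc) : 2 ))
echo "would run: make -j${jobs}"
```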