Snapping Nextcloud: The web server
The backbone of any web application is of course the web server, so that's where I started when snapping Nextcloud. I went with Apache as opposed to Nginx for two reasons: 1) Apache is recommended by Nextcloud, and 2) I'm much more familiar with Apache than Nginx.
Apache is in the Ubuntu archives, so it seemed that a reasonable starting point was to simply include it via stage-packages, something like this:
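A minimal sketch of such a part (the part name is illustrative, and exact snapcraft syntax may vary by version):

```yaml
parts:
  apache:
    plugin: nil
    stage-packages:
      - apache2
```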
I quickly hit pain going this route, which leads me to:
Lesson 1: Debian packages only get you so far
Debian lays out Apache differently than upstream's default. In particular, which configurations, modules, and sites are enabled is determined not directly in the main configuration file but via symbolic links. That makes it pretty easy to use, which I assume is the reason it was done, but the problem is this: those symlinks are set up in the Debian package's postinst script, not distributed in the package itself. In simple terms, that means installing the package takes two steps:
- Unpack the package, placing things where they need to go.
- Run the postinst script, which calls familiar commands like a2enmod to set up the default modules and so on.
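To make the symlink mechanism concrete, here's a toy sketch of what `a2enmod` effectively does. The real script does more (dependency checks, `.conf` files, and so on), and this version works in a temporary directory rather than touching /etc:

```shell
# Reproduce the Debian Apache layout in a scratch directory.
apache_dir=$(mktemp -d)
mkdir -p "$apache_dir/mods-available" "$apache_dir/mods-enabled"

# The package ships the module's .load file in mods-available...
echo 'LoadModule rewrite_module modules/mod_rewrite.so' \
    > "$apache_dir/mods-available/rewrite.load"

# ...and "a2enmod rewrite" enables it by symlinking it into mods-enabled,
# which the main config file includes wholesale.
ln -s ../mods-available/rewrite.load "$apache_dir/mods-enabled/rewrite.load"
```

Nothing in the package itself records that the module is enabled; it's entirely a product of that postinst-time symlink.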
In snapcraft, the stage-packages are only unpacked; the postinst script never runs. I'm sure there are various good reasons for this, but the obvious one is that the stage-packages are unpacked into a build directory, not onto the system, and many postinst scripts contain hard-coded paths that wouldn't work there. Regardless, the side effect is that, if I wanted to make use of Ubuntu's Apache package, I'd need to recreate all those symlinks myself. That, along with a few other reasons, led me toward compiling Apache from source instead. Where I immediately learned:
Lesson 2: Project build systems aren't really prepared for snappy
The design of snaps pushes the envelope in a few places, so it's understandable to see some incompatibilities. The one I'm talking about here is the concept of a relocatable build. Most projects use their build systems (autotools, cmake, etc.) to configure the project for location X, install it into location X, and run it from location X. This works fine for Debian packaging, where files can be unpacked into, say, /usr/bin/. However, snaps are confined to their own area in /snap/<snapname>/<version>/, so a build configured for installation into /usr/bin/ will actually end up running out of /snap/<snapname>/<version>/usr/bin/ (for example). The project is configured to install into location X, is installed into location X, but runs from location Y.
This isn't necessarily the fault of autotools or cmake. It's perfectly possible to make a relocatable project with those tools, but projects historically haven't needed to bother with such things. A great example of this is Apache. It's an autotools project, and when you give it a prefix while configuring it, it writes that prefix everywhere: config files, hard-coded paths in scripts, you name it. These paths have to be valid at both build- and run-time. That's great if you're installing and running out of the same place... but with snaps, you aren't.
Thankfully, Apache is one of the most configurable projects I've ever seen: everything can be set via a config file and/or command-line switches. I essentially had to build Apache in two phases: first install Apache and its modules, then clean up the result. I did this with a local snapcraft plugin, which gave me full control over the build process. I installed Apache and whatever extra modules were requested, then did one massive search-and-replace so it could run from within a snap. I could probably simplify this further by shipping a custom Apache configuration in another snapcraft part, but this works for now.
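That cleanup phase can be sketched in shell. This is a hedged illustration rather than the actual plugin code: it fakes a staged tree whose files contain the build-time prefix, then rewrites every occurrence to an assumed runtime path (/snap/nextcloud/current is a placeholder):

```shell
# Simulate a staged Apache tree; in the real build this would be the
# snapcraft part's install directory.
build_prefix=$(mktemp -d)
runtime_prefix="/snap/nextcloud/current"   # assumed runtime location

mkdir -p "$build_prefix/conf"
cat > "$build_prefix/conf/httpd.conf" <<EOF
ServerRoot "$build_prefix"
PidFile "$build_prefix/logs/httpd.pid"
EOF

# One sweep over every text file in the tree, rewriting the baked-in
# build prefix to the runtime prefix (-I skips binaries; --null/-0
# keeps odd filenames safe).
grep -rlI --null "$build_prefix" "$build_prefix" \
    | xargs -0 sed -i "s|$build_prefix|$runtime_prefix|g"
```

The same sweep covers config files and scripts alike, which is why a single pass after installation is enough.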
The next post in this series will discuss PHP.