Atomic Workstation
The idea of an "Atomic Workstation" is to use the ideas of "Project Atomic" to have a core operating system for a workstation that updates atomically as a whole, and then layer extra software on top of that. This is as opposed to the traditional model, where the operating system is dynamically composed on the end users system out of individual packages.
Advantages
The basic advantage of the atomic model is enhanced reliability:
- Upgrades between versions are reliable and consistent - a freshly installed F22 is the same as an F21 system upgraded to F22
- Testing that is done for the project exercises the actual operating system that is on users' machines
- There is no possibility that an upgrade of the operating system runs into problems halfway through and leaves the system in a trashed state.
- Unsuccessful updates can be rolled back, as can updates where the new operating system doesn't work with the user's apps (see the sketch below)
Currently, many cases of an unbootable Fedora system come down to bootloader or initrd issues; bootloader configuration issues remain a potential problem under the Atomic model. The ostree handling of /etc, which allows arbitrary modification by the user, also means that there is a gap between the goal of an unbreakable system and the reality.
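As a hedged sketch of what rollback and /etc inspection look like in practice, assuming the rpm-ostree client that Project Atomic ships (exact output and flags may differ across versions):

    # Show the deployments currently on disk; the booted one is marked
    rpm-ostree status
    # Make the previous deployment the default for the next boot
    rpm-ostree rollback
    # Inspect local changes to /etc relative to the tree's defaults
    ostree admin config-diff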
Use cases
Pretty much anything that the normal Workstation is used for. The primary target of the Workstation is different varieties of developer, but the Workstation is also supposed to work for other users such as sysadmins, people who want to play games, or people who only want to use productivity applications.
Installing applications
Applications are installed via xdg-app. If we provide a "fedora runtime", we can rebuild Fedora RPMs into applications in a pretty transparent fashion.
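As a rough sketch of what that flow could look like from the command line (the remote name, URL, and application ids below are hypothetical, and xdg-app subcommand names have changed between releases):

    # Configure a remote that carries the runtime and the rebuilt applications
    xdg-app remote-add --user fedora https://example.fedoraproject.org/repo
    # Install the shared "fedora runtime" the applications are built against
    xdg-app install --user fedora org.fedoraproject.Runtime
    # Install and run an application rebuilt from a Fedora RPM
    xdg-app install --user fedora org.example.Editor
    xdg-app run org.example.Editor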
Development
The primary target of Fedora Workstation is different types of developers, and currently developers often install things that don't fit well into the application model: they install daemons like web servers and databases to test apps they are developing locally. They install developer headers. They install modules for interpreted languages like Python or Ruby. And they install developer tools like gdb or valgrind.
Creating a local operating system by layering RPMs over a standard core is possible but has some considerable downsides:
- It compromises the idea that we test the same core operating system as the user is running
- It compromises the idea that upgrades cannot break; while we retain the ability to rollback, it's possible that the core operating system plus a set of layered RPMs cannot properly be upgraded.
- Installing new packages requires a reboot into the new operating system; almost nobody will be happy with this.
Doing development in containers is a better way to handle these sorts of scenarios. Containers are great for testing: they allow installation of dependencies without conflicts, and also allow creating a container image that can be deployed *exactly as is*, without worrying about whether the deployment operating system is the same as the development system. Containers can also be used for compilation, although this is currently less common. Compiling in a container gives a very straightforward way to build binaries that are independent of the developer's system and that use standard compiler and library versions. (gnome-continuous is an example of a system that uses containerization in this way.) Compiling against a standard SDK in a container makes even more sense when the build target is an application that will be run in a container.
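For example, a minimal sketch of the testing workflow, assuming Docker and a stock Fedora base image (the package names and mount paths here are just placeholders):

    # Start a throwaway Fedora container with the project checkout mounted in
    docker run --rm -it -v "$(pwd)":/src:z fedora /bin/bash
    # Inside the container: dependencies are installed here, not on the host
    dnf install -y gcc make mariadb-server
    cd /src && make check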
When the user is using an IDE, it's the IDE's responsibility to make working with containers as transparent and convenient as possible.
The main disadvantage of pushing development towards containers, and it is a strong one, is inconsistency with the workflows that developers are used to, and with the documentation that is available out there. If someone finds a tutorial on the internet about how to develop with Django and mysql on Fedora, that tutorial isn't going to work at all if we are asking them to create a Docker image.
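Concretely, the first step of such a tutorial is typically something like the following (package names as a tutorial might give them), which has no direct equivalent on an Atomic host where the package set is fixed:

    # Works on a traditional Fedora install, but not under the Atomic model,
    # where the host's package set is immutable
    sudo dnf install python-django mysql-server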
Perhaps it would be possible to use container-like technology to allow working within a Fedora image that is separate from the main operating system, and where it is simply possible to dnf install packages. But at that point, would it be better to just point people to using Vagrant, so that all the documentation and experience from people doing Linux development on an OS X or Windows desktop would carry over? This approach is likely to be controversial: how are we a better development environment for deployment on Linux if developers are working the same way they would on a different operating system?
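One hedged sketch of that idea, using Docker as the container runtime (the container name, image, and packages are arbitrary):

    # Create a persistent "pet" Fedora container that shares the home directory
    docker run -it --name devbox -v /home/user:/home/user:z fedora /bin/bash
    # Inside it, 'dnf install' works exactly as the documentation describes
    dnf install -y python-devel rubygem-rails
    # Re-enter the same container, with its installed packages, later
    docker start -ai devbox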
Layering on arbitrary RPMs
https://github.com/projectatomic/rpm-ostree/pull/107/commits is a prototype of how layering packages on top of an rpm-ostree tree could work - it creates a new tree locally with the specified packages layered on top. This is not necessarily useful for the developer use case, since rebooting to install new development headers or tools is not going to be attractive to users at all.
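If that prototype (or a successor) landed, usage might look roughly like this; the subcommand name is an assumption, since the prototype's exact CLI isn't spelled out here:

    # Compose a new local tree with extra packages layered on the base
    rpm-ostree install gdb valgrind
    # The layered tree only takes effect after booting into it
    systemctl reboot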