Nix – Death by a Thousand Cuts
I'm a NixOS user and contributor.
This post is fair.
Nix is very flexible, and it hasn't yet stabilised on a firm set of recommendations for a happy path.
Off the cuff:
* Use nixos-unstable. It's de facto stable, and gets much more attention than nixos-stable.
* Use flakes.
* Don't use multiple versions of nixpkgs. In the rare case a package is failing to build, then raise an issue, or wait, or rollback.
* On NixOS, don't use user profiles. They won't interop in the way OP hopes.
* Only use nixpkgs. If you absolutely must use another flake, only use popular ones from https://github.com/nix-community.
But until the community gives opinionated suggestions, users will stray towards bad practices.
(Also, no need to mix pipewire and jack. Pipewire can emulate jack.)
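For what it's worth, the minimal flake-based setup those suggestions point at looks roughly like this (a sketch; the hostname and module path are placeholders):

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    # "myhost" is a placeholder for your machine's hostname
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}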
I attended NixCon NA last year as part of SCaLE.
I spent quite a while trying to understand what Nix was actually trying to accomplish and how one would actually go about using it. Granted I was trying to do it on a Chromebook, but the idea stands: I should be able to get at least the nix environment set up and the silly gnu hello world built and running, right?
Turns out nah. The ergonomics are just that of a hiltless, double-bladed sword.
I'm glad I'm not the only one, but I was also aware of pushcx trying years ago and still failing [1], and he's a smart dude, unlike me, so I didn't feel so dumb.
Yes, the ergonomics are poor. I endure them for the results, but the ergonomics should be better.
Guix has better ergonomics, but has its own set of downsides.
I expect the underlying idea of holistic declarative systems is sound, and we're awaiting a polished alternative. Maybe it'll reuse nixpkgs under the hood, but replace the name, the tooling, and the language exposed to the user.
I love NixOS, it's my daily driver on my personal laptop, but it definitely has given me more than its fair share of headaches.
If everything you're going to do is in Nixpkgs, great! Nix will mostly "Just Work" and you'll get all the nice declarative goodness that you want. Since Nixpkgs is constantly getting updated, this isn't that weird of a thing.
The thing that's been most annoying to me is when I try and run generic Linux programs, only to be unceremoniously told "You can't run generic Linux programs in NixOS because we break dynamic linking". Suddenly something that would take about ten seconds on Ubuntu involves me, at the very least, making a Flake that has an FHS environment, or me making a package so that no one else has to deal with this crap [1]. I didn't really want to know how to make my own Nix package, and I don't really want to be stuck maintaining one now, but this is just part of Nix.
This means that it's still not something I could easily recommend to someone non-technical like my parents, unlike Ubuntu. You have to be willing and able to occasionally hack up some code if you want your system to be consistently useful.
To be clear, there's a lot of stuff I really like, I don't plan on removing it from my laptop, and for something like a server (where the audience is sort of technical by design), I really have no desire to ever use anything but NixOS, but it's a little less impressive for desktop.
You can run generic Linux stuff if you install nix-ld¹, the only tricky bit is having to customize the set of libraries given to nix-ld for your use-case. It includes various common libraries by default, but depending on what you want to run you may have to add to it.
¹https://search.nixos.org/options?channel=unstable&show=progr...
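For reference, wiring it up is roughly this (a sketch; the library list is whatever your particular binaries complain about):

programs.nix-ld.enable = true;
programs.nix-ld.libraries = with pkgs; [
  # extend with whatever the binary fails to load, e.g.:
  stdenv.cc.cc
  zlib
  openssl
];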
Interesting, I didn't realize that that was an option.
I've been getting by with buildFHSenv and Flakes, which, despite my complaints, really isn't that annoying. My goal at this point is to eventually compile all my flakes and take on Lutris.
As a fellow daily driver of NixOS, you’ve just summed up my biggest problem with it. You can do almost anything, and once you’ve figured out how and why you do it a certain way, that way often makes a lot of sense and the NixOS design can offer significant benefits compared to more traditional distros without much downside. But NixOS is out of the mainstream and its documentation is often less than ideal, so there is a steep learning curve that is relatively difficult to climb compared to a more conventional distro like Ubuntu.
The shared object problem in particular comes up so often, particularly if you use applications or programming languages with their own ecosystem and package management, that I feel like having nix-ld installed and activated by default with a selection of the most useful SOs available out of the box would be a significant benefit to new users. Or if including it in a default installation is a step too far, many users would probably benefit from some HOWTO-style documentation so they can learn early on that nix-ld exists, how it helps with software that was built for “typical” Linux distros, why you don’t need or want it when running software that was already built for NixOS such as the contents of nixpkgs, and how to work out which shared objects an application needs and how to find and install them for use with nix-ld.
I haven’t yet felt confident enough in my own NixOS knowledge to contribute something like that, but one nice thing about the NixOS community is that there are some genuine experts around who often pop up in these discussions to help the rest of us. I wonder if there’s scope for sponsoring some kind of “Big NixOS Documentation Project™” to fund a few of those experts to close some of those documentation gaps for the benefit of the whole community…
I would probably still use buildFHSenv if I was trying to package up a third-party binary for installation in my configuration. My usage of nix-ld is actually to solve the problem of using VSCode Remote to connect to my NixOS machine, and in particular to allow it to run binaries it downloads onto the machine for extensions (typically LSP servers).
There’s also nix-alien which does this but tries to be more automagical.
nix-alien is an older, worse approach that is not that well maintained
Just use Nix/Home Manager on Ubuntu or something instead of NixOS. You get, by far, most of the reproducibility and none of the NixOS issues. NixOS feels more like a great server environment, but not that good of a DE.
I don't think I agree with that. You don't get the system snapshotting, and you can't make your root filesystem tmpfs if you just use Ubuntu + Home Manager.
Love NixOS and Nix in general (just not the language). I've started using `steam-run` to run things I'm too smooth brain to port.
I will say that once I found out about Flakes it bothered me a lot less. I find them a lot easier to test and it's nice to be able to easily define a custom little environment for them and then just do `nix run` afterwards.
It's been especially useful to be able to specify exact versions of wine and winetricks installs on a per-game basis.
That will likely soon stop working because steam-run is no longer a grab bag for literally every library out there.
You can just use nix-ld to run anything that reasonably resembles a normal dynamically linked Linux binary.
Could you set up distrobox to run regular Linux programs?
Yeah, that's exactly what I did for a while, but really, once you get the hang of nix, it's kind of unnecessary. I keep this bit of nix to hand for anything that I need to run:
#!/usr/bin/env nix-shell
{ pkgs ? import <nixpkgs> { } }:
(
  let base = pkgs.appimageTools.defaultFhsEnvArgs; in
  pkgs.buildFHSUserEnv (base // {
    name = "FHS";
    targetPkgs = pkgs: (with pkgs; [
      /* add additional packages here, e.g. */
      pcre
      tzdata
    ]);
    runScript = "bash";
    extraOutputsToInstall = [ "dev" ];
  })
).env
Running `nix-shell` will drop you into a bash shell that looks just like a normal Linux distribution with a lot of common libraries (thanks to `pkgs.appimageTools.defaultFhsEnvArgs`), and after trying to run your application you can shove whatever you need in the extra packages when it complains about a dependency being missing. Obviously it's a bit more work than other distros, but once nix gets its claws into you, you'll find it hard to go back to old ways.
Almost certainly, though I've never tried.
I'm technical enough that making a Flake doesn't really bother me, and it's really not as hard as I was making it out to be if you're already familiar with functional programming; I'm just saying it's an annoyance.
That said, I might need to play with Distrobox, it looks like it's in nixpkgs.
I've been on the fence about Nix. I've wanted to love it (and do love the concept), but between the Waiting-for-Godot situation for flakes, the weird language, and the occasional political infighting I've seen pop up about the community, I still haven't switched.
I'm no language expert, but I genuinely don't understand why it wouldn't have been better to build some equivalent DSL in Haskell to do this given the similar lazy nature of the language. DSL for most things, then open the hood and do actual Haskell for crazier use cases. I get that Nix started before Haskell became less academic and slightly more usable in the mainstream and has built up momentum, but the lack of tooling for understanding what is going wrong when incrementally building up a config is very confusing.
I'd be curious if anyone has gone to or from NixOS, compared declarative distros to atomic distros like ublue [0], and has any thoughts. I'm a bit split about what to move to next (though my >5 year Tumbleweed install on most of my machines is holding up no problem).
I switched away from Nix OS and eventually landed on GNU Guix, which I have stayed on for about 4 years now. One of the main reasons I switched away from Nix was because of the language, and how underdocumented it all felt. GNU Guix was a breath of fresh air, using a language with decades of academic backing outside of the context of Guix (SICP was awesome for getting into it) and the whole system is very well documented, with a nearly Arch-wiki quality manual built into the OS in the info pages.
Oh, I'm interested. Are you using it on servers, or desktop? My concern is the community is small, while Nix's has been booming.
I am using it on my home server that serves my web page and also a lot of things for my home network.
It runs some guix containers and some VMs. Nothing fancy.
All declared in a couple of files.
4 years can be a very long time in a project, especially when the "network effect" hit around that time and the active user count (and contributions) grew significantly.
Also, the language is quite simple, it's just foreign and you felt more at home with Scheme, so you might not have given Nix as much of a chance. This is the classic "simple vs easy" from the Hickey talk.
Documentation is not perfect, but it has become quite a bit better over the years, and many of the problems that still linger are simply architectural ones of the nixpkgs repo, irrespective of language, and wouldn't be solved by any other language/DSL in itself.
Guix has stripped away the biggest plus from NixOS, the module system, and replaced it with a half-assed system
In all those years working on and playing with free software, I still cannot understand the incessant need for badmouthing other projects and calling things "half-assed". What a destructive habit!
I mean, modules are just regular guile modules. It feels somewhat clunky, but at the same time you can use guile's introspection to do fun stuff.
I always found it more flexible, but on the other hand I never liked NixOS modules.
Yep, it feels somewhat clunky when you are used to NixOS modules :P
Can you say why you think nix modules are the "biggest plus" from NixOS? They don't even make the top 5 for me.
When installing Nextcloud I basically have the following 4 options:
* Do everything by hand and read through the docs on every update. Does not sound like fun.
* Use someone's Ansible playbook and hope that they update it on time. Meh; customizing it is also not a walk in the park and requires some effort on my side.
* Use the upstream Docker container, which has the same customization problem as Ansible.
* Use the NixOS module. Updates are fast. Configuration changes are handled by NixOS and I can easily inject an nginx location block in my declarative config. I can also easily describe extra bits like pre-compressed assets, which are then served by nginx, in my normal workflow without having to think about them at all on updates.
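As a rough illustration of that last option (a sketch from memory; check option names like `services.nextcloud.hostName` against the NixOS manual):

{ config, pkgs, ... }:
{
  services.nextcloud = {
    enable = true;
    hostName = "cloud.example.org";                        # placeholder
    config.adminpassFile = "/run/secrets/nextcloud-admin"; # placeholder
  };

  # The module already sets up an nginx vhost, so extra location blocks can be
  # injected right next to it in the same declarative config.
  services.nginx.virtualHosts."cloud.example.org".locations."/custom/".root =
    "/var/www/custom";
}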
overlays and the module system are THE killer features. Almost no one else has something comparable to offer and if those powerful features are well understood, they can save you soo much hassle.
> I'm no language expert, but I genuinely don't understand why it wouldn't have been better to build some equivalent DSL in Haskell to do this given the similar lazy nature of the language.
My impression is that you can't really build nix as a DSL in haskell, because the core insight of nix is to introduce the "derivation" function into a pure programming language, whose behaviour is pure (the output is determined by only the inputs), but whose implementation is very much not (it builds packages from a specification).
There may well be a work-around for that (it's been a while since I haskelled), but it's likely to end up with a result that's less clean than it would ideally be.
Personally I find the nix language to be a pretty good match for the tasks it is used for (though some basic static typing would be nice).
From the outside, I can see why it looks odd, but from the inside, there's not much of a desire to switch to something better, because the language isn't the thing that gives people trouble after the initial learning period (which would exist with any host language).
> My impression is that you can't really build nix as a DSL in haskell, because the core insight of nix is to introduce the "derivation" function into a pure programming language, whose behaviour is pure (the output is determined by only the inputs), but whose implementation is very much not (it builds packages from a specification).
Evaluation is completely pure (at least with flakes, which disallow querying environment variables, etc.). Evaluation of derivations will result in .drv files in the store, but that does not add impurity to the language itself. Building the .drv is a separate step (realisation).
You could totally write something that generates .drv files in a different language and use Nix for realisation (building). If I am not mistaken, this is how Guix started - they evaluated derivations defined in Scheme to .drv files and then let the Nix daemon build them.
Aside from that, as a Nix user, I am happy that Haskell is not the language. Nix is a very small, simple language that is easy to wrap your head around and does not lead to a lot of abstractionitis. I want to say this in a way without painting a caricature, but the Haskell community has a tendency to pile on a lot of abstractions, and I would hate to see a Nix with monad transformers, lenses, or whatever is popular these days.
> Evaluation is completely pure (at least with flakes, which disallows querying environment variables, etc.). Evaluation of derivations will result in .drv files in the store, but that does not add impurity to the language itself. Building the .drv is a separate step (instantiation).
If import-from-derivation is enabled (it normally is, it's a very useful feature, and the foundation of flakes), then some derivations need to be built to complete the evaluation.
https://nix.dev/manual/nix/2.25/language/import-from-derivat...
Even then functions like "readFile" are considered to be pure in nix, but not in haskell.
> If I am not mistaken, this is how Guix started - they evaluated derivations defined in scheme to .drv files and then let the Nix daemon build them
IIRC it still works that way; there's no real reason to change. Scheme isn't purely functional though (and the guix programming model is clearly imperative), so it doesn't have this mismatch.
> If import-from-derivation is enabled
I have never looked at the implementation of IFD, but I assume that the evaluation and instantiation are still separated (and Nix will do multiple passes).
> Even then functions like "readFile" are considered to be pure in nix, but not in haskell.
I am pretty sure that, unless you use --impure, all files that are read are required to be in the store. Since the store is read-only, it does not break purity.
At any rate, I agree that there will be some hoops to jump through. But I think it would be possible to make a Haskell DSL to define derivations similar to Nix. But I don't know why one would want to.
> I am pretty sure that, unless you use --impure, all files that are read are required to be in the store. Since the store is read-only, it does not break purity.
Right, but even then the logical type for readFile would be something like "string -> string" (because from nix's perspective it is pure), but in haskell it would have to be "string -> IO string" (because from haskell's perspective it is not).
Maybe this is fine, I just suspect it would make things messier than expected.
Alternatively this could be worked around using unsafePerformIO or the FFI, but that feels a bit far away from the idea of just making a DSL? Unclear...
> But I don't know why one would want to.
Same, I just think it's an interesting discussion.
I don’t understand—the language itself is completely contained and separate from the derivation. Evaluation could be done in any language and the derivation will remain the output. You can absolutely have a better language generate derivations, surely? Hell, you could use Python, TypeScript, or Go if you wanted to. They’d even be completely compatible with the unholy mess of cursed bash that is stdenv.
What you can’t port over to another language as neatly are the modules. Good riddance, I'd say. Undebuggable spaghetti from hell.
> from the inside, there's not much of a desire to switch to something better, because the language isn't the thing that gives people trouble after the initial learning period (which would exist with any host language).
Unfortunately I have wasted enough of my life to call myself “on the inside” and IMHO the language itself is close to the number one threat to wider adoption of nix.
> You can absolutely have a better language generate derivations, surely?
Yes, hence guix. The issue is that it doesn't fit well into a pure functional language like haskell if you want to allow import-from-derivation or basic functions like "readFile", without putting everything in IO (complicating the DSL).
https://nix.dev/manual/nix/2.25/language/import-from-derivat...
> What you can’t port over to another language as neatly are the modules. Good riddance, I'd say. Undebuggable spaghetti from hell.
Not that it matters, but why not? Modules are written in the pure-functional bit of nix, so could be expressed in practically any language.
>> from the inside, there's not much of a desire to switch to something better, because the language isn't the thing that gives people trouble after the initial learning period (which would exist with any host language).
> Unfortunately I have wasted enough of my life to call myself “on the inside” and IMHO the language itself is close to the number one threat to wider adoption of nix.
I guess different people have different experiences. This was mainly based on my personal experience, but if you look through the help section on discourse, the questions are not about the language (at the time of writing I didn't find even one in the first few pages):
https://discourse.nixos.org/c/learn/9
There also just doesn't seem to be a big push in the nix community to replace the language. Nickel exists, but I don't see the push for that from the nix side.
> but if you look through the help section on discourse, the questions are not about the language
Mostly because people don't know how to ask questions about the language. That was my experience.
Over the past decade I've made a few forays into Nix and NixOS (I still need to revert one of my servers back to Debian from NixOS). Inevitably I find the language obtuse, and the help online is always in the form of code fragments whose purpose kinda sorta looks alright maybe, but doesn't ever seem to fit into the setup I've built. So then I'm faced with completely rearranging the structure to match the helpful code, or try to massage the helpful code into my structure (which may or may not be a monstrosity, nor could I explain what every one of the magical incantations are for). Rinse and repeat with the next problem.
So it becomes essentially impossible to ask questions about it because I don't actually know where one thing ends and another begins.
After a while, I start asking myself "Why was this a worthwhile venture again?"
I've heard good things about guix though. Might give that a try next. I'm done with Nix - burned one time too many.
> the help online is always in the form of code fragments whose purpose kinda sorta looks alright maybe, but doesn't ever seem to fit into the setup I've built. So then I'm faced with completely rearranging the structure to match the helpful code, or try to massage the helpful code into my structure (which may or may not be a monstrosity, nor could I explain what every one of the magical incantations are for)
None of these are language problems, they are problems with the way nixpkgs is structured and the ways nix is used (or your understanding of those).
Perhaps being purely functional causes some of this complexity/unfamiliarity, but in that case replacing it with another pure functional language (the original point I was replying to) is not going to help. Maybe replacing it with Scheme (functional/imperative) does, I don't know.
The semantics of the language make it virtually impossible to write a good lsp with “go to definition”, particularly when you get into the module system (which is where you need it most). This is a massive barrier to entry and the only way to solve it is to spend a lot of time imbibing nix lore. It’s a fundamentally unsustainable language design for adoption as an incidental build tool and it’s no surprise that those who manage to persist are also the ones who spend a lot of time with it. Unlike a more ergonomic language with good tooling and semantics which encourage adoption by casual users.
Discourse is massive selection bias. The people who make it that far are not an accurate representation of the potential nix users.
I'm using Universal Blue now (Aurora, i.e. KDE flavour) and I'm very happy with it. With its large amount of pre-installed packages and drivers (including proprietary ones), I still didn't need to install any custom package (rpm-ostree) or otherwise modify the OS config (except for turning off SELinux in /etc/sysconfig/selinux). It's the most pragmatic distro I've used so far.
SaveDesktop[0] (saves flatpak apps and DE configs) and mise-en-place[1] (declarative shell environment manager) are making my installation backupable and quite reproducible (not to NixOS standards though).
For software that's not in flatpak, docker or mise, toolbox[2] and distrobox[3] are available for the rescue. Both work really well (toolbox seems better for CLIs, distrobox for GUIs), but all atomicity/declarativity is lost.
[0] https://github.com/vikdevelop/SaveDesktop
Docker and flatpak suck so much if you want to customize anything
I use NixOS as my daily driver. I concur. I wouldn't recommend it for most people (even for me, when I decided to give it a try). I'd probably just go Arch if I were to do it over again.
The concept behind Nix/NixOS is amazing, but it needs to be polished. Flakes are the future, but they are languishing in this experimental status. Even simple things like installing packages from stable and unstable channels are too hard to figure out. The documentation is terse and the community answers are often not enlightening.
A big complaint of mine is that the builds should be reproducible, but I find I sometimes need to run `nixos-rebuild switch` several times to get a successful build. The error messages mysteriously resolve themselves. For me, this doesn't pass the bar for being considered reproducible.
Don't get me started on using an NVIDIA graphics card, either. Granted, part of my difficulty was that I was running Wayland, which doesn't have the best NVIDIA support, but I felt like I was just doing an exhaustive search through the potential config settings to see what worked. Ultimately, the combination of settings that got everything working buttery smooth was this: I ripped out the NVIDIA card and put an AMD card in.
Installing packages from different channels is still far easier than on any other distro. Try getting a Debian 10 package to work on Debian 13. You can't. GUI programs are hard because of how GUIs work on Linux. You cannot make them easily pure; they always rely on the booted system through drivers and a bunch of impure things all over the place.
If the software you are using has race conditions in its build system, then there is only so much you can do to fix that. You could, for example, run nothing in parallel with only one core, but then everything would be painfully slow. Also, the occasional network hiccup breaks things. Lately, io_uring in combination with nodejs has also been a great source of kernel bugs. You can only bang the software so much from the outside.
Nvidia is bad because Nvidia is bad, but at least switching between different driver versions and variants is possible without leaving a trace of old things behind on your system the way you would on literally any other distro.
After spending some time on NixOS I basically decided to hold off until flakes become official and the docs are written with them in mind. In the mean time I just run Arch with Nix home-manager and I'm happy.
I've developed enough good habits over the years that I don't get breakages, and home-manager allows me to easily sync my dotfiles across machines.
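e.g. the kind of home-manager fragment I mean (a sketch; names and paths are placeholders):

{ pkgs, ... }:
{
  home.packages = with pkgs; [ ripgrep fzf ];

  programs.git = {
    enable = true;
    userName = "me";               # placeholder
    userEmail = "me@example.org";  # placeholder
  };

  # plain dotfiles can be linked in verbatim
  home.file.".vimrc".source = ./vimrc;
}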
What makes you reluctant to use flakes? I initially thought I'd never have a need for flakes, but after spending an hour on YouTube and Googling, I converted to flakes.
Sorry I was unclear. I’m using flakes, but I don’t want to commit to using NixOS proper until flakes actually get blessed as ready for prod. And yes I know everyone says they are, but I won’t be convinced until the experimental flag comes off.
If you are using more than 2 channels, then flakes are the natural progression.
My experience has been in complete agreement with yours: I love the theory, but the practice is so, so painful.
And yes, I also had to settle for your NVIDIA fix. I suspect I would have had a marginally better time on Arch as there are more people beating their heads against it and documenting how they made it work. NixOS documentation is piss-poor in comparison.
Most people don't realize that you can read the Arch wiki and put the same settings into the NixOS options. What is the point in replicating all that again?
I don't know how to adapt those settings to the corresponding module - there are often differences in naming and hierarchy conventions - and there are other NixOS-specific considerations with regards to its shared-nothing architecture.
While it is technically possible to adapt the information in the Arch wiki to NixOS, you need a strong understanding of the software, how it was packaged for NixOS, and NixOS itself to do it effectively. Once you do figure it out, it's pretty straightforward, but that can be hours as opposed to minutes with Arch.
Modern modules often have a settings option which directly links to the upstream documentation and you basically write the options from upstream into the settings option.
I look in all kinds of sources all the time, skip the tedious and annoying installation steps, and just look at the described configs.
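For example, the OpenSSH module follows that pattern (a sketch; the keys are taken verbatim from sshd_config(5), and availability depends on your nixpkgs version):

services.openssh = {
  enable = true;
  settings = {
    PasswordAuthentication = false;          # names copied straight from sshd_config(5)
    KbdInteractiveAuthentication = false;
  };
};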
My one big question about nix is how the hell do I find out those options? Like cool, I know I need to set the config to some specific value based on arch wiki, but how do I read the nix package to find out what config "key" to use? I've never been able to work out where these are defined
I look at the Nix definition for the relevant module as you often need to see what it's actually doing in order to understand it (yes, it's one of those ecosystems).
For example, for the `steam` program (not package - the package is a dummy): https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/p... and then look for "lib.mk".
The key terms don’t change so I usually just grep Nixpkgs in nvim and usually have a lead on where to start. Obviously it’s a bit more work than copy/paste from the arch wiki but generally more popular config changes will have an nixos option available.
Or https://mynixos.com/ which is hierarchically navigable. I usually search for a package or setting name and browse around.
I'm on an NVIDIA Jetson, so I guess I'll just have to wait before this stuff becomes practically usable for me ...
Nix for me has been a great source of stability. I used to run Ubuntu and was never happy. Packages randomly broke, the UI lagged a lot, and I always had to dig to get things working. One day when I had a uni deadline, an automated update destroyed my wifi functionality. I had some experience with nix from work, so in anger I installed NixOS. Wifi worked and I finished my uni assignment. Haven't installed anything else on my computers since, and that was 6 years ago. Sure, things can be a pain. But NixOS has never broken in unexpected ways. I know if I update things may go wrong. But I can always go back and try a newer version again a few weeks later.
The biggest drawback is really that "random executable from the internet" does not work out of the box. And sometimes you have to spend a lot of time to package something yourself. But all in all it has saved me time and a lot of pain. I dare even say I no longer have a toxic relationship with my OS.
For those pesky random executables there's a couple of escape hatches -- buildFHSenv and nix-ld. This is also predicated on good provenance of the executables in question. One should probably not even ldd sketchy binaries.
Even proper packaging is far easier compared to other package managers. Typical distros push users away from packaging their own software, so users end up relying on ad-hoc solutions instead. Nix instead makes packaging easier by having proper tools to abstract away the nitty gritty details.
For random binaries, autoPatchelfHook works miracles.
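A rough shape of such a derivation (everything here, including the name, source, and library list, is a placeholder):

pkgs.stdenv.mkDerivation {
  pname = "prebuilt-tool";
  version = "1.0";
  src = ./prebuilt-tool-linux-x64.tar.gz;

  nativeBuildInputs = [ pkgs.autoPatchelfHook ];
  # libraries the ELF actually links against, so autoPatchelfHook can patch the RPATHs
  buildInputs = [ pkgs.stdenv.cc.cc.lib pkgs.zlib ];

  installPhase = ''
    mkdir -p $out/bin
    cp prebuilt-tool $out/bin/
  '';
}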
It wasn’t that bad creating some new derivations in my first week with NixOS; I was so used to Arch, where I had maybe a handful of modified PKGBUILDs over a decade.
For better or worse it was a positive experience, especially when you usually already have a pkgbuild to go off of.
Every time I see a linux installation with a mess in /opt because it's faster than making a package, I get annoyed.
steam-run seems to be able to run everything. It uses bubblewrap to keep the OS isolated and adds the /usr/bin stuff most exes want.
*It won't be in the future, because it is no longer the grab bag for literally every library out there.
Also, linking things into /usr/bin is done by the FHS env, which uses bubblewrap, not by steam-run.
I now use distrobox to run random binaries in a container. It's faster and convenient
> just run random binaries from the internet like it's 1998, bro
That world was fun but I don't want to go back to that place.
I use NixOS; some of the most annoying things to me are the documentation and the error reports.
I swapped my installation to a Flake-managed install a few months ago, and parts of my Nix files that were perfectly fine before started throwing errors (specifically HomeManager), and no amount of Googling the error messages got me any closer to a solution.
I looked at documentation recently to try and enable PGO/LTO and Zen 3 optimizations (don't mind compiling everything) and I think I saw at least 10 ways and none worked (gcc errors, etc).
This is why I haven't switched my NixOS to flakes yet. The community discussions always act as though flakes should be the default that everyone should use now, but I figure that the developers know what they're doing and haven't made them the blessed path yet for a reason. So far so good—my system is far more stable than it was under Debian and I've yet to run into anything that didn't have an easy answer.
I have a suspicion that because the Nix community is disproportionately likely to contain early adopters, the general mood in the forums is less risk-averse than I am with my primary stacks.
> I figure that the developers know what they're doing and haven't made them the blessed path yet for a reason.
My take is: flakes don’t align with centralised nixpkgs and ultimately don’t solve any problems that can’t be solved without flakes.
They’re just an interface for a decentralised module system. You can use them, they’re feature-complete, and they don’t align with nixpkgs: it doesn’t make sense for individual packages to have their own flakes, nixpkgs can already be loaded as a flake.
FlakeHub tries to popularise flakes, but I don’t know if there is a flake discovery problem to solve.
Ekala Project is designing a poly-repo alternative to nixpkgs (ekapkgs) and they don’t embrace flakes.
So... flakes have reached full maturity as a decentralised package format, yet their adoption has stalled within the main Nix toolchain.
> My take is: flakes don’t align with centralised nixpkgs and ultimately don’t solve any problems that can’t be solved without flakes.
I've turned to flakes to specifically solve some problems, and flakes solved them. To this day, flakes are still the only way to solve them.
Flakes aren't default due to political reasons.
Yeah, I am flabbergasted that anyone can claim flakes don't solve problems. And yet, every SINGLE WEEK some newcomer gets tripped up on channels: managing them, realizing the root's channels are different from their user's, realizing their channels are out of sync across their multiple machines, not posting their channel revision when they solicit help. Not to mention pure eval. Not to mention transitive dependency overriding.
> I am flabbergasted that anyone can claim flakes don't solve problems
Yes, that would be an outrageous claim! That is, of course, not what I said.
Arguing that channels lead to more problems than flakes is a good argument in favour of adoption of flakes. But you can also abandon channels without adopting flakes.
Which is what I said: flakes don’t solve any problems that can’t be solved without flakes.
Which is why I mentioned transitive dependency management, and pure eval. Both of which are absolutely not solved by npins, etc.
I mean, nix isn't solving any problems that can't be solved without. This can be said about nearly anything in your universe.
The point is: flakes are solving issues now in nix, and nothing else _right now_ can solve them in nix. I'm using flakes because they are currently the best path forward. Provide an alternative path that is better, and I will switch.
> I've turned to flakes to specifically solve some problems, and flaked solved them. To this day, flakes are still the only way to solve them.
Can you share some examples of such problems?
1) Channels are hard to maintain (that's why overlays were introduced...)
2) Overlays only solve the issue of adding your own packages to an existing channel
3) System channels and user channels are two different things.
4) Many times I've updated my home-manager profile and forgot to update system profile and it borked due to channels being out of sync (user error, but flakes remove that foot gun)
5) Very easy to have a portable dev env. If a system has nix installed, just typing `nix develop` in my repo would put you in exactly the same dev environment as me. In most cases it would be byte-for-byte identical. I'm not going to tell you to install 100 dependencies, not going to bother you with what the application is written in; all you have to do to build it locally is to type `nix build .#`. I'm not even going to bother you with how to run the tests, because `nix flake check` will run them.
6) Flakes provide some schema, so you know where nixos or home-manager modules will be.
7) Flakes are easy to compose together
8) I can have identical env on CI, production and my local machine without any extra overhead - flake.lock takes care of this.
All of this is extremely predictable: I got a new laptop, installed NixOS on it using nixos-anywhere, and it had pretty much the identical look and feel of my desktop. It all boils down to: channels suck and are hard to use.
With flakes I can get other people to run exactly the stuff I packaged with one command e.g. `nix run github:...` which can also be a reference to specific commit BTW.
At this point I don't even quite remember what would be the sane alternative without flakes but I am happy to discuss...
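A minimal version of the `nix develop` / `nix build .#` setup described above looks roughly like this (a sketch; the package names are placeholders for whatever the project needs):

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in
    {
      # what `nix develop` drops you into
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = with pkgs; [ go gopls ];
      };
      # what `nix build .#` builds; a stand-in for the real package
      packages.x86_64-linux.default = pkgs.hello;
    };
}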
I can say what I use flakes for at work:
I have a repository with system configurations for some CI infrastructure: a build server, a test runner.
The test runner can either be generated as an SD-card image using nixos-generators or live-updated using a remote `nixos-rebuild switch`. The OS configuration contains stuff about purposing specific motherboard GPIO pins.
Both systems depend on custom software not in nixpkgs; these are just hosted in private git and have a flake that mainly provides a devShell, but also provides a cross-compiled binary.
Flakes handle all of that in a predictable way: OS images, cross-compiled binaries, devShells, cross-repo linking, convenient hash-pinning.
Flakes bring you one interface to share common dependencies, which is not possible without an interface.
That interface could have been built in vanilla nix, though.
Instead, a bunch of very useful features are bundled with flakes, like pure eval, eval caching, and git-awareness. But flakes still have some showstopping usability issues preventing users from benefiting from these great features. Issues like that it copies your repo root to the nix store on every evaluation, which scales terribly to bigger repos. Not to mention other issues like the extremely limited ability to override a flakes inputs - you can't pass a configured instance of nixpkgs for example.
Generally speaking, the more advanced the use case, the more likely it is that flakes won't work well. Which probably helps explain why flakes are still unstable after so many years.
Everyone has some opinion. There are people who say that flakes do too much, and others saying they do too little. No matter in which direction you go, someone is always unhappy.
Eval caching depends on pure eval, and pure eval was previously not possible because channels are by design the most impure and cursed thing and are just a bandaid that lasted way too long.
The scaling with big repos gets worse when people use flakes instead of their normal build system, which is not the intended use case. You still do development like normal with the normal build system.
Flakes are still unstable because everyone wants something different and there are still breaking changes planned, which everyone would then whine about, and maintaining backwards compatibility is a pain if you are still changing fundamental things.
> Flakes are still unstable because everyone wants something
Yeah, that is definitely true (as evidenced by my complaints). I firmly believe the original sin of flakes was how many things it bundled together.
Yes, they bring an interface.
And lockfiles that are automatically maintained, into which the digests are extracted.
Unlike if you do builtins.fetch* and pin those, in which case the digests end up in your source code.
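i.e. the pre-flakes style of pinning looks something like this (a sketch; the rev and hash are placeholders, and the digest lives in the source instead of a lock file):

let
  nixpkgs = builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<some-rev>.tar.gz";    # placeholder rev
    sha256 = "0000000000000000000000000000000000000000000000000000";      # placeholder hash
  };
in
import nixpkgs { }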
I'm glad I'm not the only one who's grinding their gears with flakes.
I decided to migrate to flakes because a lot of the documentation for things I wanted to do with NixOS required flakes. It took me at least a few hours to understand what the purpose of them was though.
I just wish the documentation would be improved really, as an example https://nixos.wiki/wiki/Build_flags#Building_the_whole_syste... no longer works (gcc complains).
Be the change you want to see and help documenting the missing parts.
Also, that is the old wiki, but the same content is in the new one.
Worth noting that ChatGPT et al. are equally useless for debugging Nix. Frustrating that it’s so far behind. Error messages are often cryptic and misleading.
Yep, I thought ChatGPT was trained on Github but it's generated precisely 0 correct .nix files for me to date.
I suspect Sturgeon's law is at work here. "90% of everything is crap"
I can believe that, especially with nix - there isn't a defined way to do things. Most flakes out there will be done by newcomers learning, and flakes are relatively new, e.g. 2020. The documentation of nix is bad.
So ChatGPT-3 had very little good material to learn from, and later models won't have much better.
A lot of the documentation includes code snippets but with no detail or context on where those snippets should be, so ChatGPT often puts things in the wrong place.
Are you complaining that AI is not the savior of everything?
I love the ideas behind Nix. But as noted here, there's a thousand cuts to be found.
My biggest issue has been packaging binary-distributed programs. These often want files in a particular directory somewhere, often want to find relative-path libraries or plugins, want certain configuration options in /etc...
None of that Just Works; there's a whole confusing method to try and monkey-patch the software to work, but it's a long list of not being able to find the information you want, not being able to do what you want, or simply limitations around how nix wants to structure things that make it really, really frustrating.
If something like Nix were to be done again... I'd really recommend starting with something like a strongly typed, flake-like language, with tooling a lot closer to that of Cargo from Rust from the get-go. Errors should be easy, projects should be easily set up independently, etc., where every project can simply be built as an independent thing. Sure, there are downsides, but the upside is that you don't have the impossible task of managing one of the largest monorepos, if not the largest, on GitHub, with all of the insane issues that entails. It wouldn't be that terrible to have a crates.io equivalent to publish, test, and share flakes.
Now I think I might've just created flakehub... but flakehub still relies on nix the tool and nix the language which are far from easy to work with.
That's just proprietary software assuming you are on something Debian- or Red Hat-like. The problem is them being closed and hostile towards improvement.
Also, for someone who doesn't know Rust it is very intimidating, and if you start to go into more complex things you are easily out of luck with the tutorials out there.
The monorepo is not that big of an issue. More often you are being bitten by badly maintained software that doesn't work with a 3 year old compiler or upstream is unwilling to move forward because of LTS support or something.
> ZFS on Linux [...] The recommended way to do this is to use LUKS, not native ZFS encryption.
FWIW I've been using native zfs encryption on nixos and it works great. It lacks neat features like being TPM-backed or having multiple keys, but if all you need is password-based encryption then I think native ZFS encryption is better since you'll be able to do encrypted zfs send/recv, you'll have granular control over which datasets are encrypted (or encrypted with different passwords), you'd get cross-platform support for the encryption (for example, my FreeBSD home server can receive and decrypt my laptop backups), and you aren't adding another layer of complexity.
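For what it's worth, the NixOS side of that setup is small (a sketch; the encrypted datasets themselves are created outside of Nix with `zfs create -o encryption=on ...`):

boot.supportedFilesystems = [ "zfs" ];
boot.zfs.requestEncryptionCredentials = true;  # prompt for dataset passphrases at boot
networking.hostId = "8425e349";                # required by the ZFS module; placeholder value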
I think the main reason ZFS's native encryption isn't recommended is that there's known bugs in its implementation, especially around key rotation and send/recv.
> I think the main reason ZFS's native encryption isn't recommended is that there's known bugs in its implementation, especially around key rotation and send/recv.
Is that still the case? I thought the send/recv bugs at least were squashed a couple years ago?
I'm relatively new to nix, and this cut close to home:
> At this point NixOS has been around for 2 decades, but it still feels like it has not settled on good recommended workflows for incoming users.
Yes. This was a major pain point when I was getting started. The IRC community has been helpful in this regard. I also really don't like that nixpkgs serves as both a lib and a package set. Be one! I don't want "special" inputs in my config.
> good recommended workflows for incoming users
Users of what exactly?
Workflows for configuring a desktop to play Steam games is vastly different from workflows for managing a cattle fleet of enterprisey servers.
On how to assemble your config. Every config I open does things a little bit different, from the get-go, when using flakes. Add package sets as overlays to nixpkgs or pass inputs downstream? How to parametrize "system"? What's stuff like flake-parts and flake-utils for? Should I use them? All these came to my mind on the first day.
It's not really a "config", it's actually a program that plugs into your infrastructure-as-code process to build system images. As expected, people here love to bikeshed and have vastly different opinions on "best practices".
I understand that, but even nixos refers to configuration.nix as a "configuration file" in its documentation.
IMO, it's totally OK that people have different opinions on what are best practices. However, I would still like to see official documentation showing beginners how to do things, comparing a few options.
Trying to get Eduroam working soured me on NixOS as a desktop/laptop OS. If conventional methods fail, you're left with a completely non-standard OS designed to prevent quick hacks.
But NixOS spoiled my entire mindset around Linux. Going back to anything else feels like a massive downgrade. We would be better off today if declarative operating systems had become the standard back when they could have.
I ran NixOS while I attended university and don't remember any problems with this. Is it a NetworkManager issue?
Wi-Fi should just work like any other Linux distro, assuming you have a desktop environment like GNOME or Plasma installed.
Probably not very helpful telling him that it should work!
eduroam is not your everyday WPA{2,3}-PSK, it's WPA2-EAP. There are official shell scripts to provision certificates, but they only seem to work on major distros, and for some reason the eduroam website made different scripts for every university. Also, for most people this is their first (and last) experience with 802.1X, especially setting it up themselves.
In my experience a few years ago, it was a pain to set it up on everything except macOS and iOS (which come with eduroam certificates preinstalled in their trust stores).
But then it would suck equally as much on any other Linux distro, NixOS has no relevance here.
(I have also suffered from trying to connect to eduroam on Linux laptops).
I assume OP tried to set it up in NixOS config file, instead of using some GUI (such as GNOME's nm-connection-editor).
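If it was the declarative route, the wpa_supplicant side looks roughly like this (a sketch; the EAP method, identity, and password handling all depend on the university, and most people are better served by a GUI like nm-connection-editor):

networking.wireless.enable = true;
networking.wireless.networks."eduroam".auth = ''
  key_mgmt=WPA-EAP
  eap=PEAP
  phase2="auth=MSCHAPV2"
  identity="user@example.edu"
  password="CHANGEME"
'';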
eduroam is less one network standard implemented by universities and more a set of individual university networks that are set up similarly enough that they have a chance of talking to each other's auth servers and maybe working.
I only wish Guix had more robust nonfree packages; I think it could really give Nix a run for its money.
If you're looking for robust, nonfree packages for Guix, you can find them here:
I know about nonguix. It's the only way the wifi drivers work on my laptop. But it doesn't have half the stuff Nix does. I need a nix service running in my system configuration to install Teams and Discord. That channel is practically unmaintained compared with Nix, I'd guess because it's not an official part of Guix.
Also, for a wider search for packages across non-official channels: https://toys.whereis.social/
Just use nix
> It seems very cool that you can roll back in the case of a catastrophic upgrade failure, but has that ever happened to you? Not me.
Rollbacks saved me from completely destroying my entire system. I managed to fill up my boot partition in a way that deployed successfully but left the whole system unbootable after reboot, and the only way I managed to save it without having to completely wipe and reinstall from scratch (which means losing all my data) was to load the SD card onto my laptop, fix the boot partition by hand to ensure the kernel from the previous generation was valid, and edit the bootloader config to delete the offending configuration (because accidentally trying to boot it would re-corrupt the boot partition).
I've also used rollbacks in other less catastrophic situations, such as when I broke wireless (since I build remotely on a much more powerful machine and deploy over SSH).
I use NixOS on my home router and rollbacks also saved me from a Firewall misconfiguration that broke all network connectivity.
Nix has been my daily driver for about 10 years, all 10 on servers, and about 3/4 on a laptop.
The thing that hits close to home for me is the inability to use software that doesn't support Nix's opinions on how to do version management (for example, a post of mine from years ago [0]), software that likes mutable state for its configuration (Gnome, for example), and, yeah, the fact that trying new things that aren't packaged for nix means writing a nix derivation.
That said, I feel like nix does more good than harm for me so the paper cuts are bearable.
[0]: https://community.roonlabs.com/t/unable-to-get-roon-to-start...
I use NixOS on VMs, my desktop (gaming and productivity), and servers. I use flakes for everything.
I've painfully learned how to do everything I need. My only big complaint is updating systemd. I have yet to figure out the systemd update bug. Sometimes `nixos-rebuild switch` takes my network offline when updating systemd. It's incredibly annoying to update a box and have it drop offline. My workaround is to do a 'diff' and, when systemd is updated, reboot manually and only update the boot image.
Does it stay offline? My network often bounces when updating systemd, but I haven't seen it stay down.
I believe https://github.com/nixos/nixpkgs/pull/372196 fixes this if you are using systemd-networkd. It was merged to master last week and made it to unstable branches (https://nixpk.gs/pr-tracker.html?pr=372196).
Probably because it restarts the network service. You can configure the systemd unit to be only reloaded, or nothing at all.
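Something along these lines (a sketch; substitute whichever unit is actually taking the link down):

# keep the old process running across `nixos-rebuild switch`
systemd.services.systemd-networkd.restartIfChanged = false;
# or have it reload its configuration instead of being restarted
systemd.services.systemd-networkd.reloadIfChanged = true;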
Yup, I have had that several times too.
Mine does that too, sometimes.
I found getting started quite easy.
But then you discover there are like 4-5 different ways to manage packages and not much consensus in the community on what the correct way is. That was kinda discouraging
A fairly clear hierarchy emerges with enough experience, I think, but I don't know if there's explicit consensus about it of the kind that could make its way into documentation. Here are the rules of thumb, though (in a kind of priority order):
0. If you're new and on the fence about using flakes, go ahead. (If you know you don't want them, fine.)
1. Prefer declarative installation to imperative installation.
2. If a module exists, prefer using it to configure a package to just adding that package to a list of installed packages.
3. 'Native' packages are better than 'alien' packages.
3a. Packaged for Nix is better than managed externally. (I.e., prefer that programs live in the Nix store rather than Flatpak or Homebrew.)
3b. Prefer packages built from source to packages carved out of foreign binaries.
4. Prefer to just rely on Nixpkgs for things that are already in Nixpkgs; only bother with other sources of Nix code (likely distributed as 'flakes') if you know you need them.
5. Prefer smaller installation scopes to larger installation scopes— when installing a package, go with the first of these that will work: per-session (i.e., ephemeral dev env) -> per-user -> system-wide.
6. Prefer Nixlang to not-Nixlang (YAML, JSON, TOML, whatever).
7. If you're not sure, go for it.
If you follow these guidelines you'll make reasonable choices and likely have a decent time. The most important rule is #1, so once you know your OS, your task is to make sure you have at least one module system available to you. (On NixOS, that's NixOS and optionally Home Manager. On other Linux, that's Home Manager. On macOS, that's Home Manager and/or Nix-Darwin.)
After that, everything can find its natural place according to the constraints above. If you need to break or relax a rule, it'll be obvious.
Inevitably you'll end up with things installed in a handful of ways and places, but you'll know why each thing belongs where it is, and you can leave yourself a note with a '#' character anywhere that you think a reminder might be useful. :)
I think there probably actually isn't one "correct" way.
Anything that is best configured with a nixos module should probably be in your system configuration, but beyond that there are probably a lot more than 5 different ways and they have their advantages and disadvantages.
What I settled on was a per-user declarative setup (first with "nix-env -r", now with "nix profile"). Then I use nix shells to run software for one-offs. If I find I am running the same software from a nix shell a lot, I toss it in my declarative file.
Plenty of people hate this setup, and do something completely different (e.g. imperative managing of profiles, or using home-manager, or a dozen direnv setups). I don't necessarily think any of them are wrong, but they are not for me for various reasons.
I love nix. I use nixos with flakes, syncthing and direnv. My directories are development environments. My projects are reproducible and portable to different architectures. I don't use the language specific package managers, I have one that can bind them all. Nothing is lost in a tangled mess of imperative configuration choices. My file system is clean and organized. Everything is how I told it to be. I am happy.
My journey with NixOS is as follows: 1) great and useful for development work (I use NixOS inside WSL); 2) for your general desktop environment, I'd say it's only great and useful if you find "tooling" fun, as a hobby (i.e. you're the type of person who keeps their dotfiles updated in a git repo, although you won't need dotfiles anymore :))
I would say the biggest negative is that it seems the development of it is disjointed, as there are almost too many ways to do the same thing; some things are being deprecated before the documentation even keeps up with development.
--- personal notes: --- Also, some things are finicky and require some understanding to get to work (e.g. getting VSCode (on Windows) working with language analyzers and code sitting inside a WSL NixOS distro)
- I love Flakes but don't really love home-manager; I can understand home-manager being useful if you're using Nix and not NixOS.
- My NixOS rules are pretty simple: 1) a per-project flake.nix + direnv file (or an env playground); 2) configuring "/etc/nixos/configuration.nix" for global tools like "wget, git, etc." (don't get me started on programs.<program_name>.enable; see above for "too many ways to do the same thing")
> don't get me started on programs.<program_name>.enable; see above for "too many ways to do the same thing"
programs.<program_name>.enable implies that there is some system configuration needed in order for the program to function (or in home manager, the analog would be user configuration in your home directory).
Whereas environment.systemPackages simply puts bins on your path.
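A small illustration of the difference (a sketch; git chosen arbitrarily):

# just puts the binary on $PATH:
environment.systemPackages = [ pkgs.git ];

# also manages system-level configuration for it (here, /etc/gitconfig):
programs.git = {
  enable = true;
  config.init.defaultBranch = "main";
};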
I think this is quite fair commentary (although I quite like the Nix language personally) - as a nixpkgs developer, even I don't use NixOS on the desktop. For me it shines on servers and development environments.
> as a nixpkgs developer even I don't use NixOS on the desktop
that's not very encouraging :)
To counterpoint this, I'm a happy NixOS desktop user. It's not perfect, but still vastly better than a non-declarative distro for my taste.
Wholeheartedly agree.
NixOS gave me back my desire to customise my Linux again. I’ve run Linux since 1997; I’ve run a lot of distros.
Having to reconfigure my Linux on every hardware reset (1-2 years apart) just exhausted me to a point where I ran GNOME on Ubuntu so I wouldn’t waste time on one-off stuff.
My .emacs and .vimrc shrunk to 10% so I could reproduce them from memory if I had to.
With NixOS, installing a new machine and having it work exactly like all my machines is minutes of work.
I’ll never lose my hyper-customised setup again.
Running something like Arch or Artix again feels very much like losing my “save” button.
Seconded. I switched to NixOS a year ago after an apt install broke my system one too many times, and so far I've been very very happy with it. I've broken things, but being able to roll back to an exact duplicate of the previous state has been a lifesaver. I can't imagine wanting to go back to repairing broken apt installs.
It mostly goes the other way, I think. The community surveys haven't asked about NixOS desktop usage in particular. Still, I'm certain that a large majority of contributors are running NixOS on their desktops/laptops/workstations.
That said there are prolific and longstanding contributors who focus on non-NixOS and even non-Linux platforms, and corporate users are likely to be running Nix on macOS or Ubuntu (under WSL). It's not surprising that some users who don't use NixOS on laptops or desktops have still become Nixpkgs contributors or maintainers, imo.
Why? It just isn't what draws me to Nix.
I've never even really tried NixOS on the desktop TBH.
Nothing. It's just that, to someone with no experience with Nix like me, it feels weird that someone already deep into Nix isn't tempted to use it daily.
Maybe it’s everyone else using it on their daily driver that got it wrong?
It’s like doubting Kubernetes because one of the maintainers doesn’t run their desktop in KubeVirt.
I think it's more like Microsoft folks running macs; technically valid, but odd optics. Besides, why would you use KubeVirt to run your desktop? Just run it in containers directly.
What is so interesting about Nix is that it's not one thing. It's not (just) a distro. It's not (just) a package manager. It's not (just) a system manager. It's not (just) a language. It's not (just) a build tool.
It is all those things, but specifically, what you want it to be. Yes, that makes it super confusing, but also powerful.
I've been using Nix on a Mac for a year now. Recently I got a new Lenovo machine and the first thing I did was install NixOS; it's actually much better than I was expecting. You do notice that Nix is designed around NixOS.
You can use it daily, intimately, without using nixos. Using it for dev environments on macos for example, and servers. Did that for years before I installed nixos on my desktop.
As a counterpoint, I'm rather the opposite of you:
1. I use Nix primarily on the desktop (2 laptops, 2 workstations), though I also use it on one server. I don't think I could ever go back to any other Linux distro for my daily-driver. Things "just work" to a degree that they never have for me on e.g. Ubuntu.
2. I quite despise the Nix language; this is not to say that I think it's particularly bad (or good) as a language, just that nearly every single degree-of-freedom in language design that is largely about personal taste takes the opposite choice to what I would prefer
3. I find setting up development environments with it to be very hit-or-miss, to the point where I have in some cases fallen back on what I would do without nix, and used nix-ld to fill in the gaps.
I will say I really love the outcome of a Nix development environment. Especially with nix-direnv, having a reproducible build environment from a git clone on any machine is amazing. NixOS has also saved my ass a couple of times doing kernel updates on an old laptop; rollbacks are nice. Having consistent commands, "nix build"/"nix run", is great. It's a universal build system that works across different technology stacks. Pain to set up, but bliss when it's working.
The bad part is the impenetrable errors and obscure configuration. Although, with the rise of LLMs, I find it's not as bad. Getting a non-trivial flake.nix set up is much easier now. I could never remember the override system before, but can manage with ChatGPT, haha.
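For readers who haven't used nix-direnv, the per-project setup is roughly a flake.nix exposing a dev shell plus a one-line .envrc containing `use flake` (a minimal sketch; the package names and system are just examples):

```
{
  description = "Example dev shell picked up by nix-direnv";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # Entered automatically by direnv, or manually with `nix develop`.
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.cargo pkgs.rustc ];
      };
    };
}
```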
I really like Nix but recently I ended up in a very tricky situation:
If you are cut off from the internet, or end up with a very slow connection, you can end up totally blocked, since even a minor configuration change can require you to download a lot of data.
I also found out there is not much you can do to protect yourself from this.
Easy keyboard access is also a baked-in assumption.
I found my way to Nix because I wanted to try SteamOS on nicer hardware than the Steam Deck. Bazzite is the recommendation in that space now, but at the time, there were a lot of equally unknown options. There's a community called Jovian that has replicated the SteamOS setup atop NixOS, using Valve's own sources. Using official sources and taking the chance to learn a new functional programming language seemed like as good a place to start as any.
When it works, it's great; however, +1 to all the gripes.
_Everything_ in Nix is set by writing to a text file and calling a CLI to rebuild. If you don't have ready access to a keyboard, you might not be able to so much as change the timezone. You can end up on obsolete versions of evergreen software like Chrome too, because Nix wants to own everything, and nothing changes until you rebuild.
Possibly helpful: You can rebuild a remote system with the --target-host option of nixos-rebuild
If you’re using flakes, this is minimized, as long as you don’t cleanup (GC) your Nix store and don’t update your lock file.
Yes but the problem is the underlying complexity of modern systems. Actually kudos to NixOS for hiding it very well from you. My system is fully configured with flakes and in practice the smallest change in configuration can trigger the need for connectivity, usually to download a new dependency.
I mean, if you have VMs in the cloud, who cares. If you have a laptop or a small network that needs to remain fully operational even when connectivity is lost, think twice about it.
I courted making the switch to NixOS a couple of times, but I just don't really see the value-add for me right now. Yes, if you have a lot of machines then it may make sense.
At this point I just use Nix home manager for my dotfiles/userspace programs on a normal distro and I feel like I get 90% of the benefit without any of the headaches.
The reason I keep it around on my laptop is mostly because of the snapshotting.
I generally do know my way around Linux command line nowadays, but with Ubuntu and Arch (especially early in my career when I didn't know what I was doing), I would get into states that break the video driver, or break GRUB, or make the machine unstable, and the only thing I could do was reinstall the whole OS.
With NixOS, since it's all declarative, if I end up really breaking something, I can always reboot and choose a previous generation. It makes things a lot less scary for me, I can experiment with and play with different boot parameters and drivers and I know that I won't be stuck spending two hours reinstalling everything. It changes the entire way that I work.
For example, on my current laptop (Lenovo Thinkpad, AMD), I was having an issue with my USB ports idling out, so sometimes the first ~4 seconds of my typing wasn't registering since the USB port had to wake up. The solution involved adding a kernel parameter `usbcore.autosuspend=-1`.
Had this been something like Ubuntu, I have been burned enough trying to add kernel params that I might honestly have just lived with the annoyance because I didn't want to risk everything breaking, but because I knew that there was no actual risk with NixOS, I was able to fix it permanently, and I have the solution committed to Git if I ever have to do this on another computer.
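For anyone wondering what that fix looks like, it's a one-line addition to configuration.nix (a sketch; the parameter is the one mentioned above):

```
{
  # Stop USB ports from autosuspending, so the first keypresses register.
  boot.kernelParams = [ "usbcore.autosuspend=-1" ];
}
```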
Just a side note for those who aren't on NixOS, but who would like 90% of snapshotting: use timeshift. Especially if your file system is BTRFS. It'll do daily snapshots of all your system files, going back 5 days by default. I've only had to use it once, but it was invaluable. Another nice thing is it's very much a set-and-forget program.
Yeah, timeshift is pretty cool too. I think I prefer NixOS's style as it's directly integrated into the rebuild system, and the dedicated Nix store allows me to do the snapshots while also being persistent, but if you don't want to drink the NixOS Kool-aid, timeshift is definitely a valuable tool.
>At this point I just use Nix home manager for my dotfiles/userspace programs on a normal distro and I feel like I get 90% of the benefit without any of the headaches.
If it works for you, sounds good!
I comment because I recently had the opposite thought: that maybe I should migrate off nix home-manager, to keep 90% of the benefits (NixOS) and avoid all the headaches (home-manager quirks). Funny how opposite our experiences are.
For me, I love NixOS because when I configure something it just works, and when I break something I can easily undo it. And I like how my system doesn't accumulate cruft over time and stays lean.
Say I delete your entire root filesystem right now. How fucked are you?
With NixOS: I don't care. You can recover from a half deleted root file system.
My root filesystem is actually just in-memory for NixOS using tmpfs [1]. If you were to trash my root filesystem, I just reboot and it's restored. I know of no other operating system that allows something like that.
To quote a friend: "A new car smell on every reboot."
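A rough sketch of what that ephemeral root looks like in a NixOS config, assuming the usual pattern where anything persistent (/nix, /home, etc.) lives on real disks mounted separately; the label and size here are placeholders:

```
{
  # Root is a RAM-backed tmpfs: emptied on every reboot, then repopulated
  # by NixOS activation from the (persistent) /nix store.
  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "defaults" "size=2G" "mode=755" ];
  };

  # The store itself must survive reboots, so it sits on a real disk.
  fileSystems."/nix" = {
    device = "/dev/disk/by-label/nix";   # placeholder label
    fsType = "ext4";
  };
}
```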
Well OpenWRT does, but probably not what you want on your laptop :-)
Heh, probably not, though my homemade router is actually based on NixOS with the ephemeral tmpfs root, so kind of the same idea as OpenWRT.
Lol, please don't. I do take regular snapshots so it probably wouldn't be too bad.
I generally agree.
Nix is an excellent build tool. I use it for all of my projects now. And when building is tricky, e.g. Elixir, I rely on Nix devshells to get my tools setup.
NixOS is an amazing server distro. My primary home server VM is running NixOS and it has been rock solid and easy to maintain. I plan to run NixOS exclusively as I add more machines.
But I haven't had a good experience with NixOS on my development VM (as compared to Ubuntu or Debian). You end up spending more effort than expected up front just to get something working. One recent frustrating experience was trying to get VS Code Server to run on NixOS so that I could connect to it over SSH. Ultimately I just gave up.
> It seems very cool that you can roll back in the case of a catastrophic upgrade failure, but has that ever happened to you? Not me.
It did, and thanks to that rollback feature, my system was working in a few minutes.
I recently tried NixOS 24.11 but quickly decided it’s not for me or something I’d recommend. While the system initially seemed promising, it was frustrating in practice.
My first hurdle was configuring the network in configuration.nix. The installed template implies network settings go there, but that’s misleading. Worse, "nixos-rebuild switch" requires a working network, so a broken config leaves the system unable to fix itself – a catch-22.
Next, I tried "nix search wget", as suggested in the manual, but hit errors about missing experimental features. I had to enable both nix-command and flakes manually to get this to work:
nix --extra-experimental-features nix-command --extra-experimental-features flakes search nixpkgs wget
Even then, package search, like configuration, seems to depend on a network connection, which feels unnecessarily fragile.
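As an aside, the long flags can be avoided by enabling the features once, either in ~/.config/nix/nix.conf (`experimental-features = nix-command flakes`) or declaratively in configuration.nix (sketch):

```
{
  # Enables the new CLI and flakes permanently, so plain
  # `nix search nixpkgs wget` works without extra flags.
  nix.settings.experimental-features = [ "nix-command" "flakes" ];
}
```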
> "nixos-rebuild switch" requires a working network, so a broken config leaves the system unable to fix itself – a catch-22
I just ran into this on an airplane! I'm relatively new to nixos (~4 weeks in) and I had configured my laptop to use DNS over TLS and DNSSEC, which is normally not a problem. But to get through those login gateways for the wifi, you need to disable all of that so they can MitM your DNS requests. "Thats no problem, everything is already in my nix store so I should be able to comment out those lines and run a nixos-rebuild" I thought to myself, but alas I was wrong. I'm sure I could have worked through it with enough persistence but it was a short enough flight that I decided to just wait until I landed to continue my otherwise wonderful journey into the nix.
I've tried Nix on a couple of occasions, most recently about two months ago, and ended up coming to the conclusion that it's just not for me.
I can see the value in a completely declarative configuration for my OS.
But the hurdles to get something worthwhile out of that value prop are just too high with my (low) level of skill (in this area) coupled with the limited time I have to build new skills. There are other things I want to invest my time in, but I can totally see this being where someone wants to spend some of their time.
I've never found setting up a Linux distro the way I want it particularly hard and once in a while I like to just start from a blank slate to see what's new—so yeah, not for me.
Just happened to notice how the author contradicts himself by stating "I love to distro hop" but later in the article "I don't just like to distro hop".
A good article nonetheless.
I'd like more clarity on this:
> The advantage over docker here is that (when using Flakes) Nix builds are completely reproducible. Docker containers may be isolated, but surprisingly they are not deterministic out of the box. With some work you can make docker deterministic, but if that's what you need, it's much easier to use Nix.
as the whole purpose of the Dockerfile is to create a reproducible environment.
The whole purpose of the Dockerfile is not to create a reproducible environment. The purpose of a Dockerfile is to run a bunch of commands inside of a container and save the output. Those commands may or may not produce the same output every time they're run.
For example, if you have a debian base container that you run `apt install nginx` in, what version you actually get depends on a lot of different things including what the current version of nginx is inside of the remote repositories you're installing from _when the docker build command is executed_, not when the Dockerfile is written.
So, if you do "docker build ." today, and then the same thing 6 months from now, you will probably not get the same thing. Thus, Dockerfiles are not reproducible without a lot of extra work.
Nix flakes are not like that - they tag _exact_ versions of every input in the flake.lock, so a build 6 months from now will give you the _exact same system_ as you have today, given the same input. This is the same as like an npm lock file or a fully-specified python requirements.txt (where you have each package with an ==<version>).
So, you definitely can make Dockerfiles reproducible, but again, the Dockerfile itself is not made to do that.
Hope that helps your understanding here!
> For example, if you have a debian base container that you run `apt install nginx` in, what version you actually get depends on a lot of different things including what the current version of nginx is inside of the remote repositories you're installing from _when the docker build command is executed_, not when the Dockerfile is written.
It's even worse. It's not the current version when the command is executed, it's _the current version taking the layer cache into account_, which is a classic Docker gotcha that you need the single-line `apt-get update && apt-get install` pattern to sidestep. The layer cache really makes it hard to reason about.
Do any of your Dockerfiles make e.g. apt calls? If so, then they will get a different version of software installed when built on different days, because that will depend on the state of the package servers.
A more trivial example of non-determinism would be that you can write a Dockerfile that uses curl to fetch data from random.org; the functions Nix provides for fetching from URLs require you to specify the sha-sum of the data you fetch.
Nix flakes make it hard for you to inject anything into your dependencies that hasn't been hashed to confirm its identity. It in many cases still isn't 100% deterministic (consider e.g. a multithreaded build system where orderings can influence the output), but it's a big improvement.
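Concretely, a Nix fetcher looks roughly like this; the URL and hash below are placeholders, and the build fails if the downloaded content doesn't match the declared hash:

```
pkgs.fetchurl {
  url = "https://example.org/some-tarball-1.2.3.tar.gz";  # placeholder URL
  hash = "sha256-<placeholder>";  # placeholder; nix reports the expected value on mismatch
}
```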
It's reproducible at a superficial level. Tags are mutable, so someone can push a different "3.1" between build 1 and 2, which results in a different build. You can also be fuzzy with tags, so if you say "FROM nginx:3" as your base (or nginx:latest), then build 1 and 2 can change because of a new tagged build upstream.
Then there are a million app-level changes that can creep in, e.g. copying local source is non-deterministic, apt update, git clone, etc. Nix requires you to be fully explicit about the hash of the content you expect in each of those cases, so if you build it twice it is actually the same build.
I guess you could consider a docker image a "reproducible environment," but it's certainly not a reproducible build; running docker build twice on the same directory isn't guaranteed to give you the same image. You could put in the work to make it a reproducible build, but it doesn't do anything to help you achieve that. Nix defaults to reproducible builds, and requires flags for "impure" non-reproducible builds. It does this by requiring all dependencies be managed by nix, and all sources be copied into the nix store.
Author here.
The idea with nix flakes is it has a lock file which should guarantee the same build. This is like package-lock.json or pdm.lock which contains dependency checksums for every package.
Docker works more like your standard package manager. If you ask for mysql 5, today you may get mysql 5.1, but next week you may get mysql 5.2. So it does not come with a guarantee.
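In practice that pinning lives in two files: a flake.nix that names the inputs, and a flake.lock that Nix generates alongside it (a minimal sketch; the branch name is just an example):

```
{
  # flake.nix names inputs symbolically...
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11";

  outputs = { self, nixpkgs }: {
    # ...packages, devShells, NixOS configurations, etc. go here...
  };
}
```

Any build (or `nix flake lock`) then writes flake.lock recording the exact nixpkgs commit and content hash, so a rebuild six months later resolves to the same sources unless you explicitly run `nix flake update`.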
Containers are only reproducible at run time not at build time. Once you build a container and pull it down by its sha256, you'll get the same environment each time. However if your Dockerfile does any IO (curl, apt get, pip install etc) you're quite likely to get different images on different machines.
Docker images are just as reproducible as binary blobs, which is essentially what they are.
Binary blobs can be easily reproducible, depending on how they’re built. By comparison, the “easily” part doesn’t apply to any non-trivial Docker image.
The issue is that you have to lock down all your dependencies, including local data, repos, and registries. Most people, and even most companies, don’t have the resources to achieve that, so they simply don’t do it.
Further, Docker doesn’t provide any significant mechanisms to help ensure reproducibility in the face of these issues, so you can’t say that Docker supports reproducibility.
(Declarative rclone https://github.com/nix-community/home-manager/pull/6101)
Neat. Thanks.
Nix has been a godsend at dealing with different dev environments and (cross-)compiling complicated software stacks. When you write a proper nix derivation that runs inside a nix sandbox it does not matter where you run it, you'll have high guarantee it replicates the result for you anywhere.
So a server that's dedicated to well-supported (by NixOS) services, running NixOS, is awesome. It's easy to upgrade every 6 months and generally very painless. Everything else is a PITA, though. Of course, if you use an LTS like Debian Stable or Ubuntu, you only have to upgrade every 5-ish years, so unless you always need the latest and greatest release of something, it maybe isn't worth the hassle.
Trying to hack on other people's junk with NixOS is just asking for pain. Just use Ubuntu LTS like everyone else. That's generally easy and painless.
"Trying to hack on other people's junk with NixOS is just asking for pain."
To me, a large part of the very definition of a useful general-purpose OS is that it's flexible and enables you to do whatever you need to do today, without the developers having previously planned and provided for exactly that thing.
It's like the systemd argument all over again. The exact thing systemd aims to prevent is the exact thing that made the original Unix so powerful and useful that 40 years after it was architected, it still worked, because they didn't try to think of every possibility; they gave you a toolbox that let you do whatever you might need to do. Where systemd sees a shell script as "unmanaged chaos", I see "unconstrained utility": a useful toolbox, including a saw that doesn't have its own opinions about what boards I can cut.
If "Trying to hack on other people's junk with NixOS is just asking for pain." that is basically the definition of "this is not a useful operating system that empowers me to get things done". It's useful maybe as a crafted firmware for a static device.
(Not saying that nixos inflexibility is driven by the same paternal "we'll give you the whitelist of actions Poettering thinks are valid" attitude as systemd. In nixos it's merely a natural consequence of indirection and layering. They aren't trying to remove any agency from the user/admin, it's just the simple indirection itself that makes pre-planned and standardized things easier at the expense of anything direct and unplanned becoming harder.
Like instead of having an OS that may or may not be driven by ansible, let's replace the OS with just ansible, and now there is no way to do anything any other way except by figuring out how to write a playbook to do it.)
> To me that's a large part of the very definition of a useful general purpose OS is that it's flexible and enables you to do whatever you need to do today,
NixOS gains most of its power from restrictions. These restrictions enable awesome things like starting a shell with all dependencies in seconds versus minutes using alternative technologies (used to great effect by Replit). Nix works surprisingly well for most software, but anything with a ton of dynamic dependencies is going to cause issues. Even knowing what the dependencies might be statically can be hard. Sure, providing an OS with no restrictions and complete flexibility is an option, but then you'll just end up no better off.
Whatever the future of operating systems will be, it certainly will involve more restrictions and less flexibility.
Not to be too presumptuous, but you sound like someone who might like Gentoo. It still works without systemd, though it does install sys-apps/systemd-utils (mostly for the /dev FS stuff). I'd say the focus of Gentoo is on "managing choice", and it is true that some choices can make your system(s) diverge from the most frequent instances floating around (but those tend to be systemd-based these days). It's still pretty decent, though. I've been incrementally upgrading the same basic install since Jan 2007. Of course, you may already know all about it and have other opinions.
Safe presumption. I never used it for real, but for instance I like FreeBSD and the ports system, prefer MacPorts to brew, etc.
> Trying to hack on other people's junk with NixOS is just asking for pain.
Yeah, but being that Nix is essentially a giant wrapper for the system, that kind of goes without saying. The other side of the coin is that, using other people's Nix junk is extremely easy. Far easier than what any other distro could hope to achieve.
My favorite example is simple-nixos-mailserver. Try passing someone dovecot, postfix, and openssh configurations/instructions on any other distro and see how long it takes before they mess up or, more likely, give up.
Whereas with simple-nixos-mailserver, you're guaranteed to get something to work, essentially right out of the box.
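To give a flavour of what that looks like, a rough sketch of a simple-nixos-mailserver configuration, with option names written from memory and the domain, account, and paths as placeholders; check the project's own docs for the authoritative interface:

```
{
  mailserver = {
    enable = true;
    fqdn = "mail.example.org";     # placeholder host name
    domains = [ "example.org" ];   # placeholder domain
    loginAccounts."me@example.org" = {
      hashedPasswordFile = "/var/lib/secrets/me.passwd";  # placeholder path
    };
  };
}
```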
Agreed. Like I said, if what you want to do is within NixOS's well-supported wheelhouse, it's great to have a fully declarative OS that includes application configuration.
LTS is harming the industry and holding everything back! IMO it is the wrong direction.
Why do you think that? That seems like a pretty extreme viewpoint to me.
Stability is a great thing for busy professionals that want stuff to just work.
How many apps have you upgraded that have crashed and burned from the update? Me? A lot, both commercial and OSS. With OSS at least you get all the pieces, so you can figure out how to put it back together again. With commercial software, you roll back, file a bug report, and hope someone somewhere in the company is incentivized enough to fix it for you.
To mitigate breakages we should be aiming for better test coverage, at various build levels: class, package, program, system.
Our industry's story for system-level testing, for Linux distributions, is poor. NixOS tests are decent, but need more coverage, and something similar needs to be available to upstream so issues are caught during development.
Meanwhile, LTS releases have downsides:
* Alienating you from upstream: why contribute upstream if you'll only benefit from them in 2 years?
* Having to support stable versions makes refactoring harder. Developers don't want stable to be too different, lest backporting becomes too tricky.
* Maintenance cost is sunk, compared to if we can make rolling releases reliable (see above re: tests and easy rollbacks).
https://abseil.io/about/philosophy#we-recommend-that-you-cho... is the same idea but from Google C++ team.
If your software is in such heavy development that you need changes all the time, it should never be in a stable distro to begin with, it's not stable code.
Overall (last I checked), the testing is roughly equal between the stable distros (Debian/Ubuntu/etc.) and NixOS. The difference is that stable distros back-port bug fixes. NixOS rarely does, since its release cycle is only 6 months long.
> * Alienating you from upstream: why contribute upstream if you'll only benefit from them in 2 years.
I contribute upstream, regardless of if I'm running NixOS, Debian Stable or Windows. It makes no difference to me which OS I'm running when a bug shows up. If I find a bug in X package, I go fix X package. Sure I also fix it locally in my running instance(s), but that's my problem, regardless of which OS I'm running.
> To mitigate breakages we should be aiming for better test coverage, at various build levels: class, package, program, system.
Yes, yes we should. Most software has a terrible testing story. There are very few pieces of software with robust testing. SQLite is one such. One could probably name a handful of others, but after that the list gets really hard to add to.
> If your software is in such heavy development that you need changes all the time, it should never be in a stable distro to begin with, it's not stable code.
Stable code by this definition experiences some stagnation. But the cost of stagnation is worth it for the stability. That's LTS.
Slowly, we'll build enough checks that we can achieve frequent change and still be stable. This is partially here, and "unevenly distributed".
> Overall (last I checked), the testing is roughly equal between the stable distros (Debian/Ubuntu/etc.) and NixOS. The difference is that stable distros back-port bug fixes. NixOS rarely does, since its release cycle is only 6 months long.
NixOS has system-wide tests that run on PRs and go green if they pass. E.g. upgrading OpenSSH will trigger a suite of VMs to start, each running OpenSSH in a different configuration and checking that it works as expected. These run automatically, are visible to contributors/reviewers, and take O(minutes) to complete. They run on automated backport PRs too. (A sketch of the shape of such a test follows below.)
> I contribute upstream, regardless of if I'm running NixOS, Debian Stable or Windows. It makes no difference to me which OS I'm running when a bug shows up. If I find a bug in X package, I go fix X package. Sure I also fix it locally in my running instance(s), but that's my problem, regardless of which OS I'm running.
Bravo. But I don't think it's controversial to suggest that, on average, the closer a person is to the upstream version, the more likely they are to have the motivation and success in making a contribution that meets both their and upstream's needs.
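For readers who haven't seen one, the NixOS VM tests mentioned above look roughly like this; it is a trimmed-down illustration, not the actual OpenSSH test from nixpkgs:

```
# Spins up a QEMU VM defined by a normal NixOS module, then drives it
# from a Python test script.
pkgs.nixosTest {
  name = "sshd-smoke-test";
  nodes.machine = { ... }: {
    services.openssh.enable = true;
  };
  testScript = ''
    machine.wait_for_unit("sshd.service")
    machine.wait_for_open_port(22)
  '';
}
```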
I've been using NixOS on my laptop for over a year now and I still don't have an answer for "my version of firefox/darktable has a bug in it, but I can't update it without upgrading all the rest of the software installed on my machine." I keep thinking there has to be a way around this, but there doesn't seem to be one that's clean and not hacky/brittle. Other than that I love it, but that's a pretty huge caveat.
Have you come across Nix Package Versions¹ yet? If you’re looking to work around a recent bug or other unwanted change by installing a slightly older version of some package from nixpkgs, Marcelo Lazaroni built a nice page to help with that and wrote up an explanation² of how it works.
This only works for versions of your package that do exist in nixpkgs but aren’t currently the default for your chosen channel, so it doesn’t help if your channel is out of date and you want to install a newer version that hasn’t been packaged yet. But then that’s the case in almost any Linux distro if you rely on installing your software from the distro’s native package repo, and much the same solutions exist in NixOS as in other distros. Although if you’re really determined, you can also start writing your own derivations to automate building the latest versions of your favourite applications before they’re available from nixpkgs…
¹ https://lazamar.co.uk/nix-versions/
² https://lazamar.github.io/download-specific-package-version-...
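The pattern those pages lead you to is importing an older nixpkgs revision just for the one package, something like the sketch below; the revision, hash, and package name are placeholders you would take from the search results:

```
let
  # Placeholder revision and hash -- in practice, copy these from the
  # lazamar.co.uk results for the package version you want.
  olderPkgs = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<revision-with-old-version>.tar.gz";
    sha256 = "<hash reported by nix on first fetch>";
  }) { };
in
{
  # Mix the pinned older package in alongside your normal package set.
  environment.systemPackages = [ olderPkgs.darktable ];
}
```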
As with most situations in Nix, there is an elegant and clean solution; but that solution is also hacky and somewhat obfuscated.
The problem really stems from how tightly entangled packages are to the nixpkgs source tree. Nix offers the most foundationally modular system possible, and it organizes its packages in a monolithic source tree! This means that despite installed packages being totally isolated in the /nix/store/, the package source (what Nix calls a "derivation") is semantically tied to whatever specific dependency version was implemented in the contemporary nixpkgs source. If you want to provide users more than one version of a package inside the same source tree, you must put the version in the name, like SDL2 or python3.11.
I started this GitHub issue a long time ago: https://github.com/NixOS/nixpkgs/issues/93327. Somewhere buried inside may lie the answer to your question. Either way, I have mostly given up on wrapping my head around the current ecosystem of half-baked solutions to this mess; despite still actively using NixOS in ignorance.
Just want to flag that the first image likening Nix to the Holy Trinity has a spelling error in the text "The Operating Systam".
The whole thing is full of spelling and grammar errors. The author desperately needs a copy editor.
I used Debian for 3 years, and all the problems I had with it are now solved with NixOS, replaced with a completely new set of problems I could only have dreamed of on Debian. There's no going back to Debian for me.
I'm temporarily keeping Nix on single-use machines and LXC, e.g. places that are just a Docker host. For that and CI/CD use cases it should be fine.
The multitude of ways things can be configured spooked me a bit for desktop use though.
For me there are two kinds of pain in software troubleshooting:
1) I Don't Know The State: there's some weird little bit of state hanging around somewhere I don't know about that's messing with my end result. Once I learn about this state, correcting it is trivial.
I hate this kind of problem solving, it's not mentally stimulating, I don't learn a lot, looking up answers online is often not helpful. And when I fix it, I don't gain a lot beyond just the problem not happening anymore. (examples: secret config file i didn't know about, an application edited its own config, file permissions were wrong, symlink was wrong, cache is invalid, etc.)
2) I Don't Understand The Concept: the idea of what something is supposed to do or why hasn't clicked for me yet. Getting to that state will take some time as I wrap my head around it. Once I do, I have that knowledge forever and can build on it to understand even more concepts.
I _love_ this kind of problem solving. It's mentally stimulating, it builds on itself, increases mastery, problems are easily found / troubleshot online because people are dealing with similar issues and not their own machine's personal state.
NixOS has been almost entirely the second kind of problem solving for me. The first two weeks were basically a full time job of fussing with my config but once I got it, I got it forever. Writing my first derivation was confusing, but now it's easy and it'll always be easy.
I think this is why Nix has been able to "get away with" having the abysmal, fragmented documentation that it has for so long - it's so good at being a near-stateless, all-encompassing declarative configuration that even outdated blog posts, random people's personal configs on github, even the nixpkgs source code can be helpful enough to solve your problem (and that's often all you have to go on!!!)
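For anyone curious what "writing a derivation" actually amounts to, a minimal sketch; the package name, URL, and hash are placeholders:

```
{ stdenv, fetchurl }:

stdenv.mkDerivation {
  pname = "hello-example";   # placeholder name
  version = "1.0";
  src = fetchurl {
    url = "https://example.org/hello-example-1.0.tar.gz";  # placeholder URL
    hash = "sha256-<placeholder>";  # nix prints the real hash on first build
  };
  # stdenv's default phases run ./configure, make, and make install,
  # so a well-behaved autotools project needs nothing more than this.
}
```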
Anybody know what the author is talking about with GNOME and Plasma not being allowed in the same configuration? I don't currently have both configured, but I ran this way from somewhere in the neighborhood of 20.09 to 23.05 with no difficulties.
I've been daily driving nixOS on my desktop, and manage my work and personal macOS machines with flakes. I love this ecosystem, and have even managed to make folks at work use nix devShells -- frankly, the learning curve is pretty steep; the payoff is that I've learned so much. I've been very happy with it -- I run a windows VM with libvirt/KVM/QEMU when I want to game, and use the same to run "local" LLMs. While docker is a great technology, I actually prefer using nixOS containers (which are, under the hood, systemd-nspawn containers).
When I worked on my startup briefly, I built nixOS images with everything needed for raspberry pis; all I needed to do was use dd to burn the image to an SD Card.
For me, nix is a wonderful and perfect solution for building stable software.
My setup is a smorgasbord of dotfiles and symlinks. It's been built up over 13 years, so it's very well seasoned and has served me well, but I've long been meaning to move to Nix to make my setup cleaner. The bad in this article isn't that bad, so now I'm amped even more to do this!
It reads like the author didn't even try to understand NixOS.
> But not always. Why? Because there are so many ways the context can vary as everyone is doing different things. Is that config you found:
This whole section reads like the author gave ChatGPT code snippets and asked it to explain them.
Nix snippets are not very copy-pastable because everyone creates different abstractions to reduce boilerplate. I have probably done "JSON -> list of packages" conversion 3 different ways in my codebase.
I don't think he touched on whether the server side is a more valid use case, but it was nice to read someone else's take on using it for a desktop. Thanks for the contribution.
He did find functional programming to be sort of mystical, so I don't know if I trust his take on assessing the Nix language itself.
TL;DR: just stick with Ubuntu or Arch unless you feel like experimenting.
Author here. Your TL;DR is spot on. Yes, my intent was to focus on desktop use, since most things I read don't consider that specifically. I did talk about how I would keep this running on some simple home servers, since I think that's where Nix shines. But some of my servers are Raspberry Pis, which I mentioned I'm worried about running Nix on due to resource limitations. I should probably just try it.
I wish remote build/deploy for Raspberry Pi was in a better state - it seems like a perfect fit for NixOS.
I've got x86 servers running NixOS that are deployed using Colmena, but it seems to fall apart when I add cross compilation into the mix.
I'm running NixOS on a raspberry pi and I deploy to it with deploy-rs¹. This works pretty well for me. My dev machine is an Apple Silicon laptop with nix-darwin installed and I use its nix.linux-builder module to run an aarch64-linux VM as a remote builder to build the rpi's system. All this means the rpi never has to do any building itself, and doesn't even need the nixpkgs source installed either.
If you want to do this yourself, I recommend using https://github.com/nix-community/raspberry-pi-nix so the system is configured much more closely to how the stock raspberry pi image works. The benefit of this is better reliability of stuff like bluetooth.
Cross compilation is hit-or-miss, but using qemu/binfmt works just fine, if a bit slow.
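For reference, the qemu/binfmt route is a one-line NixOS option on the build machine (a sketch, assuming an x86_64 host building for a Raspberry Pi):

```
{
  # Lets the host build aarch64-linux derivations by transparently running
  # their build steps under qemu-user emulation -- slow, but hands-off.
  boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
}
```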
Desktop NixOS is my daily driver for almost 3 years. I don't bother diving into deep technicalities, flakes or other complicated stuff. I define programs and settings that I need in the configuration.nix. That's all. And it works perfectly!
For complicated stuff I run containers such as docker or podman (you could use distrobox too), so I don't have a headache while trying to achieve it in NixOS (but I respect everyone who does this and makes this system grow).
Have my server infrastructure on NixOS. Huge boost in productivity and stability; I would never go back to standard Linux. But man, if something breaks, it's a nuke going off. Sorry for the long post, but I thought I'd share my experience:
Mass storage on a big encrypted RaidZ array of spinning rust, no issues. Bootloader, /boot, encrypted root and databases on mirrored NVMe drives. And man, is that a nailbiter on each update. I set up my drives during 22.11 following the NixOS Root on ZFS instructions [1], which were amended following reports of systems becoming unbootable [2] and mostly removed later [3].
Besides initially being a broken setup [4], with an increasing number of mounts on each update stalling system writes and causing updates to fail, it became a well-running machine after that was fixed. Then during the 24.05 update, with no config change, the system became unbootable [5]. After a tough recovery [6] I never figured out how to do mirrored bootloaders again and switched to a single-bootloader setup. To this day I have interactions I don't understand and am trying to fix [7], which sometimes cause updates to knock services offline: the `nixos-rebuild switch` process stops services, goes to update the bootloader, fails due to missing mounts and exits with services still offline, prompting manual intervention.
[1] https://github.com/openzfs/openzfs-docs/blob/1211e98faf1f37a...
[2] https://github.com/openzfs/openzfs-docs/commit/1211e98faf1f3...
[3] https://github.com/openzfs/openzfs-docs/commit/4fb5fb694f44c...
[4] https://github.com/NixOS/nixpkgs/issues/214871
[5] https://github.com/openzfs/openzfs-docs/issues/531
[6] https://github.com/openzfs/openzfs-docs/issues/531#issuecomm...
[7] https://discord.com/channels/568306982717751326/132854109967...
I replied to your issue at https://github.com/openzfs/openzfs-docs/issues/531. Not sure if it will be any help, but that's how I'm doing my mirrored zpool EFI boot partitions.
Instead of having an OS that might be driven by ansible, let's just replace the OS with ansible.
Yeah, same.
I originally had a very positive Nix experience. I had to fix some Docker build issue at work. This particular Dockerfile needed the buildx command. Except for whatever reason I just couldn't install the needed component for buildx to work. Somewhere during a messy Ubuntu 18.04 to 22.04 upgrade something broke, and now I was getting a conflict (with nothing, btw; it's just empty).
I used nix to install docker and the required component with no hassle and was able to get on with my work.
Inspired by this experience, I wanted to evaluate Nix for more long-term usage. I set up my personal PC to dual-boot Windows and NixOS. The trouble started basically immediately.
I couldn't get KDE plasma 6 to work with Wayland. It turns out that you need to also enable Wayland support in sddm. Why the official installer did not do this or why this was only documented on some random unrelated page in one of the two wikis I couldn't tell.
OK, with video working, I immediately spotted another problem. I set up my disk encryption, login, and KDE Wallet passwords to match, in the hope that on boot I would only have to enter it once (this is an option you can enable). The login screen worked, but KWallet just wouldn't log in automatically. I spent literal hours on this over the past year, each time walking away frustrated. Is this a Nix issue, a Linux issue, or a KWallet issue? Who knows. Eventually I just set an empty password for KWallet. This still didn't solve my git SSH keys not loading automatically. I just gave up.
Most importantly, dev environments just do not want to work with VSCode. The recommended extension stopped working years ago; the other extension still works, but there's no way to force a specific load order, so other extensions will randomly just not see your Nix env. I had to resort to installing Rust globally just so I could get rust-analyzer working. So much for dev environments. I don't even want to go over how much working with package managers when using Nix for building sucks (particularly npm/pnpm/yarn).
If I had installed nix on a work laptop they would have fired me for wasting time. Next time I'm trying one of the immutable config driven distros. Hopefully that is going to work better.
Thoughts on a love/hate relationship with Nix on the desktop.
I dabbled with NixOS many years ago for a much shorter time than the author has spent on it, so I have much less experience with it. My main problem with it was that the problem of declarative config is basically solved at the software-level already.
System services have always been able to be configured by dropping files in /etc. Lots of software also specifically supports config dropins, so that merging configs from multiple sources is even easier. Even stuff like creating users and groups can be done on systemd distros by dropping in config files. Similarly configuring user software is mostly about dropping files in ~/.config etc.
Package managers vary. Alpine's apk is the best in that `/etc/apk/world` is the list of packages you want, and every run of `apk upgrade` will install and uninstall packages accordingly to match that list, but it doesn't have dropins. apt and zypper don't even have a config file, only an opaque database that you can only interact with using the commands. But you can maintain your own packages list and then script apt/zypper to diff against that list and install/uninstall accordingly. (zypper in particular does maintain a list of packages, except it's a list of packages that you *don't* install explicitly but are just auto-installed as dependencies of the ones that you did, which is funny to me.)
I get declarative config on all my devices (a mix of OpenSUSE, Debian, Ubuntu and postmarketOS, across servers, desktops, laptop, phone) with just an Ansible-like setup. For each device I have a `$hostname.roles` file that contains one role per line. Each role corresponds to a `role.$role` directory that contains any files that should be deployed as part of that role (both under homedir as well as at the system level) as well as a `packages` file that lists any packages that should be installed for that role. Then there's a small shell script that matches the hostname of the machine against this directory and ensures that all files exist, and that all the required packages are installed and no extras are installed. Also the entire directory is in version control so I have a log and reasoning recorded for every change.
The author mentions rebuilding a laptop to do another laptop's job by applying the other's Nix config. I have also used my script to rebuild a few devices after their disks died and I had to reinstall the OS from scratch, so it ticks that checkbox too. And of course I've added and removed roles occasionally to add/remove features from individual devices.
NixOS would give me a way to rollback the entire config, but I can also do that with this. In case I need to rollback to packages that no longer exist in the distro repository, I have btrfs snapshots to roll back to.
NixOS would give me a way to install multiple versions of packages, but this is something I've never needed. I primarily stick with distro software so it is always consistent, and if two distro softwares require different versions of the same dependency, distros already know how to solve that (make two coinstallable packages).
The one time I tried to build someone else's Nix project (and they even had a Dockerfile with nix in it to do the build so it would be completely independent of the host), it didn't build for me, so I'm not sure how reproducible it really is. But that might've just been a problem for that one project.
I'm sure Nix(OS) has benefits for other people, but for me the benefits that I would care about are handled entirely by dropping files in the right places, and I don't have to use a different OS or package manager or bespoke programming language to do that.
> an Ansible-like setup
Using another common tool (salt/chef/puppet/...) or something home-rolled? (Just asking because I'm interested in new options in this space)
As I said, a small shell script I wrote. The only things it needs to do are sync files according to the `filelist` files and sync packages according to the `packages` files, so I don't need the full verbose DSL of Ansible etc.
Also it means it has no dependencies on the target machine. You might say Ansible doesn't need anything on the target machine except ssh, but it does require the target machine to be reachable over ssh, which is not necessarily the case if I'm rebuilding a machine such that its network is not already configured. So in that case all I need to do is sneakernet my git repository over and then execute a shell script.
How do you deal with removing a package? For example, the case where you have htop in your config but no longer want it on your system or in your configs.
The script builds a list of "expected packages" for the host by unioning the `packages` files in all the roles of the host. Then it enumerates all the packages that are "intentionally installed" (*). If there's a difference, it prints the difference and I add what needs to be added and remove what needs to be removed.
(*): This depends on the package manager:
- For Alpine / postmarketOS it's just the content of `/etc/apk/world`.
- For Debian / Ubuntu it's `apt-mark showmanual`.
- For OpenSUSE it's `zypper search --installed-only` (which includes both intentionally and automatically installed packages) and then subtracting the contents of `/var/lib/zypp/AutoInstalled`.
Whenever I see "declarative" these days, I get suspicious.
See, what people are promising when they say "declarative" is a dream: you just tell the computer the final state you want, and the language does what it needs to do to get there.
The problem here is that in order for that to work, the language has to know how to get where you want it to go, and give you the vocabulary to tell it that. That "how to get there" and "vocabulary" have to be written in a (gasp!) general-purpose language.
So now, you come across a situation where you need the declarative language to do something it doesn't know how to do. So now you're forced into the position of helping create the declarative language. And as it turns out, creating a declarative language in a general-purpose language is a lot harder than just using a general-purpose language to tell the computer how to go where you need it to go in the first place, because you have to delve into an existing codebase which is inevitably giant.
I've begun to refer to this anti-pattern as "Configuration-Oriented Programming" (COP).
I'm being a little facetious here--I've not looked into Nix enough to have any educated opinions on it. But it sounds like my outsider impression that Nix is a COP might not have been too far off.
The older I get, the more I realize that so much of the divide in the tech field is simply between the two camps of "the tools are the interesting part" vs "getting things done with the tools is the interesting part".
Your veiled implication that Nix and NixOS aren't about "getting things done" is, I think, more than a little unfair. I'm using multiple programming languages at work. Each one of them has its own dependency manager that does basically the same job as the other ones. In Python it's Poetry, in Ruby it's Bundler, in JavaScript it's npm/yarn, in PHP it's Composer, etc. A lot of projects require extra setup steps outside of the dependency manager. It's not a good experience that lets you get up and running quickly. And my situation with scripting languages isn't the worst case: God help you if you have dependencies between projects in AOT compiled languages that use different dependency managers.
Of course, the standard answer is to spin up a ton of Docker containers. Docker works, but it looks to me like a local optimum rather than a truly painless solution. It sucks as a build system, and Dockerfiles not being reproducible is the default outcome that needs significant extra care to avoid (how many times have you seen apt update or some equivalent in one)? Besides, why should I have to worry about a whole another OS inside my main OS, with potentially different tooling and conventions, when what I really want is just specific versions of a couple of tools?
I think we've gotten used to development environments being a shitty experience to the point where it seems part and parcel of programming, but when you take step back, it's apparent that the situation causes a lot of frustration and wastes a lot of time. To me, Nix's combination of package manager and reproducible build system looks like one of the most credible ways out. NixOS' declarative configuration and rollbacks are nice side benefits too, for server admins and newbies respectively. Nix just needs a lot more polish. I'm not about to introduce a tool where the most common workflow is still considered experimental. For now, I'll keep using Docker, but I watch Nix with interest and can't wait until its UX matures.
EDIT: Removed claim that Bazel and Buck's creation was motivated by multi-language support. Looks like the main motivations were speed and reproducibility.
> I think we've gotten used to development environments being a shitty experience to the point where it seems part and parcel of programming…
I have staunchly refused to allow this at my current employer, and I've been there long enough that I can steer this.
This isn’t acceptable and all it does is help introduce inconsistency and regressions.
I realize I am fortunate in that I can effect change at my job in this arena.
What's been your strategy for doing this?
Well the short answer is: I defined the workflow/process for our team, everyone bought in, and new people don't really have a choice but to follow along.
The longer answer is, earlier in my career I ended up spending a not-insignificant amount of time helping people debug things, and about 70% of the time the issue was their local build environment. Everyone did it differently and it was very messy.
I only made a few "rules" to follow. 1.) Your local folder structure must mirror exactly the structure in our central repo. 2.) Must use relative paths, which works when you follow rule 1. 3.) No ad-hoc "new shiny things" without team buy-in. If it can be used locally only and won't affect the team obviously I don't care.
I use Nix and agree with most of what you said, except that the main value proposition of docker-on-dev-machine is not convenience, but approximation of production.
Nix works extremely well _with_ Docker! That's what makes it so interesting.
https://www.youtube.com/watch?v=0uixRE8xlbY
That is not why either Bazel or Buck was created.
Bazel is the bane of my existence. I have never seen such an anti-distro tool, so hostile to your system libraries.
That's the whole point, you can't rely on systems for hermetic builds.
It's not? I recall reading that coping with a variety of languages was one of the main motivations, but do correct me if I'm wrong and you have a citation.
They do hermetic builds so that it is viable at large scale via granular caching. They were made to make that work at those scales, with pretty much zero consideration for any external package managers or anything like that
Hm, I did some searching. Bazel's FAQ mentions multi-language support prominently, but only suggests speed and reliability as the initial motivations. I'll edit my post.
Yeah that is a thing for sure, was just commenting on why it was created :)
Naturally the solution is to use a complete different OS stack. /s
The more I use nix, the more I understand it's both. Nix is genuinely so fucking great, but the ecosystem and docs and language are a mess. It needs to be cleaned up, and things _are_ getting better.
The core philosophy of Nix is so damn solid though, and that's the real innovation here. As long as its philosophy manages to stick around, it's OK.
This is how I feel about Nix.
I've given it a try and it's quite incredible how easy certain things are.
The problem is the Nix language and the developer experience are really rough.
There's a chance that Nix could figure this out or a competitor with Nix-like ideas could become mainstream. That's my hope anyway.
It would be a shame if the ideas behind Nix got dismissed because of the current issues with Nix.
It's basically "I refuse to learn how to containerize."
Just learn, use, promote best practices and stop forking the ecosystem _even_ further...
There, I got that off my chest.
As a heavy container user myself - I've been using containers since I needed to build my own 3.x kernel to test them - Docker doesn't solve the reproducibility problem Nix solves. I.e., I can make a Dockerfile that does `RUN curl foo.com/install.sh` and who knows if that'll ever work again. Nix, on the other hand, doesn't allow you to do IO during builds[^0], only to describe the effect of doing the IO.
[0]: Though apparently Darwin (macOS) doesn't support sandboxing by default, so you can bypass that, but anyway.
>who knows if that'll work ever again
Unless you restrict your nix files to specific channel revisions, which when I had to deal with it was poorly documented, and involved searching through specific channel commit hashes in a particularly opaque way, you also can't know that your nix derivations will ever work again.
A number of people on my field used nix as a way to make their research code repositories reproducible, and everything broke within around three years.
Yeah, that's a UX papercut: pre-flakes, nixpkgs was always the nixpkgs on your machine. There are docs, but if you're not expecting that to happen you wouldn't think to look them up.
You can just store the actual container though, which will reproduce the environment exactly; it's just not a guidebook on how it was built.
The value of reproducibility at the Dockerfile level is that we're actually agnostic to getting a byte-exact reproduction: what we want is the ability to record what was important and to effect upgrades.
> Which will reproduce the environment exactly, it's just not a guidebook on how it was built.
By that logic every binary artifact is a "reproducible build". The point of reproducibility isn't just to be able to reproduce the exact same artifact, it's to be able to make changes that have predictable effects.
> The value of most reproducibility at the Dockerfile is that we're actually agnostic to getting a byte-exact reproduction: what we want is the ability to record what was important and effect upgrades.
More or less true. But we don't have that, because of what the grandparent said: if a Dockerfile used to work and now doesn't, and there's an apt-get update in it, who knows what version it was getting back when it was working, or how to fix the problem?
I do get the theoretical annoyance of how it’s technically not reproducible, but in practice most containers are pulled and not built from scratch. If you’re really concerned about that apt-get then besides a container registry you’re going to host a private package repository too, or install a versioned tarball from a public URL, but check the hash of whatever you’re downloading and put that hash in the dockerfile.
So in practice.. if the build described in the dockerfile breaks, you notice when you’re changing / extending the dockerfile.. which is the time and place where you’d expect to need to know. My guess is that most people complaining about deterministic builds for containers are not using registries for storing images, and are not deploying to platforms like k8s. If your process is, say, shipping dockerfiles to EC2 and building them in situ with “compose up” or something, then of course it won’t be very deterministic and you’re at the mercy of many more network failures, etc
> If you’re really concerned about that apt-get then besides a container registry you’re going to host a private package repository too, or install a versioned tarball from a public URL, but check the hash of whatever you’re downloading and put that hash in the dockerfile.
Right, but if you're doing that then you probably don't need to bother with the docker part at all.
> you notice when you’re changing / extending the dockerfile.. which is the time and place where you’d expect to need to know
You notice, sure, but you can't see what it was. Like, sure, it's better than it failing when you come to deploy it, but you're still in the position of having to do software archaeology to figure out how it ever worked in the first place.
Like, for me the main use case for reproducible builds is "I need to make a small change to this component that was made by someone who left the company 3 years ago and has been quietly running since then", and you want to be able to just run the build and be confident it's going to work. You don't necessarily need to build something byte-for-byte identical to the thing that's currently running, but you do need to build something equivalent. The reproducibility isn't important per se but the declarativeness is, and with Docker you don't get that.
The issue is really why you are trying to be reproducible in the first place. The best use case is proving authenticity: that the source code became the binary code as written, but we're so far away from that that it's not realistic.
My dream system would be CI which gives me a gigantic object graph and can sign the source code from the ground up for every single thing including the compiler, so when a change happens you can drill down to what changed, and what the diffs were.
None of this has anything to do with the Dockerfile itself, only with the tools used within it.
Nix provides the tooling to do reproducible builds. Meanwhile docker is a wrapper around the tools you choose.
Also, just to note, Docker does allow you to disable network access during builds. Beyond Dockerfile, which is a high-level DSL, the underlying tech can do this per build step (in BuildKit LLB).
I'm not talking about a bit-perfect reproduction though, just being able to understand dependencies. Take, for example, a simple Dockerfile like:
```
FROM python:latest
ADD . .
RUN pip install foo
```
If I run this today, and I run it a year from now, I'm going to get different versions of `python` and `foo`, and there is no way (with just the Dockerfile) to know which versions of `foo` and `python` were intended.
Nix, on the other hand, forces me to use a git sha[^0] of my dependency; there is no concept of a mutable input. So, to your point, it's hard to 'upgrade' from version a -> b in a controlled fashion if you don't know what `a` even was.
[0]: or the sha256 of the dependency, which, yes, I understand is not easy for humans to use.
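For contrast, here is roughly what that pinning looks like in flake form. This is only a sketch: `<commit-sha>` is a placeholder for a real nixpkgs revision, and the generated flake.lock additionally records the content hash.

```
{
  # Sketch of a flake pinned to one nixpkgs commit; <commit-sha> is a placeholder.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/<commit-sha>";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        # The interpreter and the library both come from the pinned snapshot,
        # so a build a year from now sees the same versions.
        packages = [
          pkgs.python3
          pkgs.python3Packages.requests  # stand-in for "foo"
        ];
      };
    };
}
```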
Well, what about "FROM python:3.18" and using requirements.txt or something like that? I mean, running an arbitrary Python version will get you in trouble anyway.
It depends on the repository used, the actual point release of Python, its compiled features, the way the requirements file is written, the way the current version of pip resolves them, the base of that Python image, and a lot of other things. Doing that gets you maybe 30% of the way towards something reproducible and consistent.
There's no mechanism to enforce this is done consistently. With nix, there is.
The degree to which this guarantee is useful or necessary depends on your use case.
Containerizing an application is far easier than packaging an application for Nix - I think most avid Nix users would agree with that.
The reason why Nix users "refuse" to containerize is that Nix packages and their associated ecosystem come with a host of benefits that their containerized counterparts do not.
Nix handles containerization better than Docker does.
Here is a flake that builds a Go app and a Docker image for it (based on headless Chrome): https://github.com/aksiksi/ncdmv/blob/aa108a1c1e2c14a13dfbc0...
And here is how the image is built in CI: https://github.com/aksiksi/ncdmv/blob/aa108a1c1e2c14a13dfbc0...
Here is a derivation that fetches https://www.usememos.com/ from source, changes the color palette, builds a Docker image out of it, and spins up a container that traefik exposes automatically: https://gist.github.com/knoopx/afde5e01389e3b8446f469c056e59...
Very cool! I actually considered implementing the Compose Build spec this way for compose2nix, but instead opted to just use Docker/Podman directly.
You're going to need to do more than just link to the flake if you want to show why that's better than the Dockerfile equivalent, because the code itself isn't selling it.
1. Unbelievable layer reuse out of the box. Each Nix build output is placed in its own layer, including your binary (up to a max of 120 or so layers). Rebuilding the image will only result in the final layer changing, which minimizes work on image push (see the sketch after this list).
2. Everything is pinned to nixpkgs, including dependencies. Anyone who builds this image will get the exact same versions (vs. apt-get update in a Dockerfile pulling a more recent version). It’s just sqlite in this case, but you can imagine how it would work with more dependencies.
3. It is trivial to build a “FROM scratch” image - in fact, that’s the default behavior in Nix, because Nix implicitly includes all runtime dependencies alongside your binary. This is less of a challenge with Go, but YMMV with other languages.
4. You can define your entrypoint script - or any other one-off script - in-line. Not a huge advantage, but still quite useful.
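As a rough sketch of what points 1-3 look like in practice (the package and image names here are illustrative, not taken from the linked repo):

```
# Sketch: a layered OCI image built by Nix; pkgs.hello stands in for your app.
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildLayeredImage {
  name = "example-app";
  tag = "latest";

  # Each store path (the app and each runtime dependency) lands in its own
  # layer, so a code change only rebuilds and pushes the final layers.
  contents = [ pkgs.hello ];

  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

The result is a tarball you can feed to `docker load`, and because it only contains the closure of your app, it is effectively "FROM scratch" with no base image to keep patched.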
There is even an alternative pattern that allows you to reap these same benefits directly in your Dockerfile:
https://mitchellh.com/writing/nix-with-dockerfiles
Hope that helps.
The problem with docker is less the containerization and more the half-baked build system.
Huh? I use Nix to create containers. Nix is a programming language, a build tool, a package manager and an entire ecosystem of extremely powerful tools.
The entire reason why I use Nix in the first place is because it allows me to containerize with _better_ reproducibility than docker itself.
I do get where you're coming from though. It's not immediately clear that Nix can do all this stuff. Nix is a lot more than just "glorified weird package manager".
At its core, Nix is a way to specify dependencies in a mathematically sound manner. Once you have that pure dependency graph managed with Nix, you can start doing the _real_ fun stuff.
Like, you can containerize it. Or you can create a VM from it, or an ISO, or a NixOS distribution with _only_ that package installed.
Nix actually makes containerization _easier_, not harder. But yes, I empathize. Nix is a mess and it is difficult to understand, it will take a few more years before it is fully settled.
In the meantime? I'm going all-in on Nix (the philosophy, not necessarily any particular variant) because I really strongly believe this is the way forward.
> Nix is a programming language, a build tool, a package manager and an entire ecosystem of extremely powerful tools
You have identified part of the problem.
I agree. This _is_ a huge problem.
Nix and containerization aren't drop-in replacements for each other.
You can use Nix to build containers. Containers on their own don't guarantee reproducibility, especially if the build process isn't static and pure (how many times do we `sudo apt update` inside a Dockerfile?).
And not everything is going to be containerizable. That only works for most applications. What if we're trying to manage our cloud servers? That's where Nix really shines.
Do you really think that Nix developers don't know how to containerize applications? You think people are using Nix because they refuse to learn how to containerize, and therefore opt to learn a _much more_ difficult and arcane build process? The logic doesn't track there.
Well, yeah.
Nix is attempting to be better than containerization.
Saying "improvements aren't necessary because we already have 'good-enough' technology" is a meaningful argument when the improvements aren't significant.
In my view, they are significant because Nix can be used to create a fully featured OS instead of just a VM.
> they are significant because Nix can be used to create a fully featured OS instead of just a VM
Look up the Bootable Containers project by Red Hat [0]. Fully featured OS built from a Containerfile, bootable on bare metal.
I agree that Nix's design is much better than Docker's, and has a bunch of features that the OCI ecosystem doesn't (e.g. remote builds[1], partial downloading of the build tree, non-linear build process[2], nix store import/export, overlays, I/O isolation, much better composability), but "creating OS instead of VM" [did you mean container?] is not one of them.
[0] https://github.com/containers/bootc
[1] You can use DOCKER_HOST, and I'm happy that this option is there, but Nix does it better.
[2] Perhaps with BuildKit it's no longer true; I haven't checked what happens if you have a multi-stage build with one stage depending on multiple previous ones (which are otherwise unconnected). I think Earthly can parallelize this scenario: https://earthly.dev/
Yes, BuildKit can do this. You can also use BuildKit to create a bootable VM, it's just that nobody is doing it. You can use estargz to fetch just the pieces you need from a dependency rather than the entire dependency as well. Really, all of the things you mentioned should be possible with BuildKit; it's just that the focus of most things is Dockerfile, which has much more limited functionality (though some of the things mentioned above still apply to Dockerfile).
Containers with build scripts are a bandaid over broken systems. It's better practice than having zero executable documentation on how to stand up a system, but it's also far from the best that could exist.
eelco thesis: 2003
lxc: 2008
docker: ~2012
no thank you.
I think that's a bit reductive, but I get the intent. A lot of people see systemic problems in their development and turn to tools to reduce the cognitive load, busywork, or just otherwise automate a solution. For example "we always argue over formatting" -> use an automated formatter. That makes total sense as long as managing/interacting with the tool is less work, not just different work.
With Nix I still think it's a net positive, but the "different kind of work" side of the equation is pretty large. That's why we're building Flox [1]: the imperative user interface of a package manager (flox install, flox search, etc.) that builds out a declaratively configured, reproducible, cross-platform developer environment. I really think it nails the user experience by keeping that "different work" side of the equation small, and (I hope) just gets out of your way.
[1]: https://flox.dev
I just started using Flox last weekend and so far it has been quite a nice experience. There are two things I don't like, though:
1) The Homebrew package is a cask that also installs Nix. While I like Flox, I don't want my systems to be married to it. Yes, I know about the install option with "generic Nix", but I'm using Homebrew with a Brewfile on both macOS and Linux, and I would like the Homebrew package to be just Flox.
2) Documentation is OK for getting started, but not for anything more than that. There are nice manifest.toml examples for many use cases in floxenvs[1] but you need to find those first. Also I'm not sure how I feel about inline shell scripts in toml. While it works, separate files would be easier to handle, at least for me.
[1]: https://github.com/flox/floxenvs
I tend to agree. I also think both sides need to learn to better appreciate the other.
Without people getting shit done with the tools we’ve built, there would be no demand for better tools and no need to write them.
Without better tools, the things we can get done are limited. Better tooling is an exponent to our productivity. The things we can accomplish today would have been nearly unimaginable nearly thirty years ago.
> Better tooling is an exponent to our productivity
Nix & NixOS have made me much more productive. Maintaining a desktop is effortless. If something breaks, I can simply reboot into a previous installation. I can also try software without installing it, and develop parallel projects relying on dependencies that would be mutually incompatible under a regular imperative package manager.
But I recognize that if you need to do something slightly unusual, documentation is incomplete and scattered. For simple things, my contrarian view is that Nix is not hard at all. The subset of Nix I use can be learned in a couple of hours. I think the trick is to avoid getting sucked into packaging complex software with messy build systems. If the stack you use is well packaged in NixPkgs, it's a joy to use. If it's not, it's better to stay away.
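As an example of the "parallel projects with mutually incompatible dependencies" point, the per-project subset I mean is roughly this; the package choices below are only illustrative:

```
# shell.nix sketch: each project declares its own toolchain, nothing global.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [
    pkgs.python311   # this project wants Python 3.11
    pkgs.nodejs_20   # another project can pin a different Node without conflict
  ];
}
```

Running `nix-shell` in the project directory drops you into that environment, and leaving the shell leaves the rest of the system untouched.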
> The things we can accomplish today would have been nearly unimaginable nearly thirty years ago.
Like what? Writing an entire operating system like Linux or Windows NT from scratch?
Well, no, that's still a Herculean effort. But look at all of those slick TODO apps we are now building non-stop!
Nix currently has the most packages of any distribution, see https://repology.org/
The model of having packages on GitHub with pull requests scales very well.
Therefore, you could argue that people are getting things done with Nix.
> Nix currently has the most packages of any distribution, see https://repology.org/
This is a meaningless point. Different distros split packages differently.
Nix's ecosystem really has exploded over the last couple of years. Even very obscure packages can be found that are definitely AUR/PPA territory in other distros.
At least, that's my personal experience.
No, it's very much meaningful. Every new ecosystem is immediately graded on whether it has the critical mass of adoption. The build and release system is great; what I actually wish for is for nixpkgs to offer debs and rpms to ensure that maintainers flock here.
I agree that it's somewhat difficult to compare, especially as node and python packages are separately packaged in Nix.
However, it is clear that the Nix folks are quite productive at maintaining packages. So that statement is judgemental, as it implies that all those packages are not 'the right things to get done'.
I first heard of Nix a few years ago from someone who fell firmly into the camp of "the tools are the interesting part." Despite my reservations, perhaps because I didn't want my opinion of the person to lead me away from something useful, I started to mess with it. After about 30 minutes I decided it was not for me and have not touched it since.
I do keep an eye on Nix-related stories to get a sense of whether or not I should change that stance. So far, nothing has led me to change it.
I'm 100% in the third camp.
I enjoy writing tools.
How do you feel about Kubernetes?
It would be good to have some interesting tasks to do?
I think the tools should also do much of the work. I actually prefer batch systems that are a simple execution of a program against a dump, which just process all the data and generate data with the new states, over a networked online system that breaks all the time, often due to DNS.
Microservices keep me awake, but simple CSV processing I can fix in my own time.