49 points by bzurak | Mar 16, 2026

11 Comments

mohamedkoubaa | Mar 19, 2026
The distributed object fallacy is never going away, is it?

https://martinfowler.com/articles/distributed-objects-micros...

ericreg92 | Mar 19, 2026
What's curious to me here is that the GitHub repo linked in the article seems to exactly recreate the "operating system" that the article calls an "opinion in the wrong layer".

Perhaps I am misunderstanding this, but after looking at the code, what exactly are we achieving here over other frameworks? The repo is obviously very new (and the author certainly seems busy), so perhaps a better question is: what do we aim to achieve? So far it seems like the exact same pattern with some catchy naming.

Regardless, I love ambitious projects furiously coded by one crazy person. And I mean "crazy" in the best sense of the word, not as an insult. This is what open source is all about.

Please prove us all wrong. If you fail, you'll learn a ton!

lelanthran | Mar 19, 2026
> Regardless, I love ambitious projects furiously coded by one crazy person.

Me too; the world lost a treasure, once.

RIP Terry Davis. There was so much to be learned from your approach.

MomsAVoxell | Mar 19, 2026
I ask this question, but instead: I ask it of Lua.

As in, what if there were a Linux distro focused primarily on building a Lua layer on top of everything, system-wise? Just replace all the standard stuff with one single, system-friendly language: Lua. Keep C/C++ everywhere as it currently is; put Lua on top, all the way to the desktop.

It’s only a thought experiment, except there are cases where I can see a way to use it, and in fact have done it, only not to the desktop: embedded, of course, for real-time data collection, processing, and management. In cases like that, it is superlative to have a single system/app language on top of C/C++.

So I think there may be a point in the future where this ‘single language for everything’ becomes a mantra in distro land. I see the immense benefit.

Lua table state can be easily transferred, give or take a bit of resource management vis-à-vis syncing state and restoring it appropriately. Lua bytecode itself, in a properly defined manner, can serve as a perfectly cromulent wire spec. Do it properly and nobody will ever know it isn’t just a plain ol’ C struct on an event handler, one of the fastest, except it’ll be very well abstracted to the application.

Oh, and if things are doing bytecode, may as well have playback and transactions intrinsically… it is all, after all, just a stream.

App state as gstreamer plugin? Within grasp, imho…
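
A rough Python analogue of the "replayable stream of state" idea, for concreteness (dicts stand in for Lua tables and JSON for a bytecode wire format; every name here is illustrative, not from any real system):

    import json

    class EventLog:
        """Append-only stream of state changes; replaying it rebuilds state."""

        def __init__(self):
            self.events = []

        def record(self, key, value):
            self.events.append({"key": key, "value": value})

        def replay(self):
            # Playback: fold the whole stream into fresh state.
            state = {}
            for event in self.events:
                state[event["key"]] = event["value"]
            return state

        def to_wire(self):
            # The entire log is just bytes on the wire, like any stream.
            return json.dumps(self.events).encode()

    log = EventLog()
    log.record("volume", 0.8)
    log.record("track", "intro.ogg")
    assert log.replay() == {"volume": 0.8, "track": "intro.ogg"}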

hnax | Mar 19, 2026
100%
mikkupikku | Mar 19, 2026
Do it. Build on the work of AwesomeWM, probably; it's a Lua-focused window manager that's quite nice. You can also build up less "minimalist" widgets and whatnot using Lua and Claude Code, which is very good at unconventional GUI work in Lua. I can attest to this specifically: I've had it build numerous GUIs with mpv Lua userscripts.
Lvl999Noob | Mar 19, 2026
Wouldn't you face the same problem as .NET on Windows? AFAIK, .NET-based frameworks and apps suffered from huge performance issues. It might have improved in recent times; I am not actually a Windows dev.

If just the end-user application is in Lua, then maybe it's fine and the high-level-language slowdown won't matter. If you want to wrap the low-level kernel APIs etc. in a high-level language as the canonical interface, I would be very skeptical.

MomsAVoxell | Mar 19, 2026
I’m not sure the ‘language slowdown’ is as significant as one might think, given the common shared libs that would be in place with a one-size-fits-all solution, but it’s really all just a dream until someone does it, anyway.
interiorchurch | Mar 19, 2026
There used to be a roguelike game called Angband, which was written in C. There was a vibrant community around it, many of whom produced Angband variants by hacking the text config files and the C code. One developer got the idea of making most of the game scriptable in Lua over a C core, which would, in theory, let even more people hack at the game and produce variants.

What happened was the Angband community imploded, and the number of variants went way down.

I don't know if this is a generalizable example and there may have been other factors at work, but it is a cautionary tale.

Angband is still around btw, and is still excellent. But I believe it's C and text config files again now.

whatevaa | Mar 19, 2026
"Arrays start at one" scared everybody away.
sporkl | Mar 19, 2026
Sounds kind of like Arcan: https://arcan-fe.com/about/
petcat | Mar 19, 2026
I don't have much experience with this kind of thing, but from here it looks like it would be nearly impossible to reason about the performance of a program written this way, when something as simple as a function call in your Python interpreter can fluctuate wildly in predictability just due to underlying network latency, remote host saturation, etc.

It seems like you would need an entire observability framework built and instrumented on top of this to ever really be that useful.
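
A toy sketch of the minimum you'd want wrapped around every such call (all names here are hypothetical):

    import time
    import statistics

    def timed_call(fn, samples, *args, **kwargs):
        """Wrap a 'transparent' remote call and record its latency; the
        call site alone no longer tells you anything about its cost."""
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            samples.append(time.perf_counter() - start)

    samples = []
    # Pretend sum() is remote: locally it's microseconds; over a
    # saturated network the same call could take seconds.
    timed_call(sum, samples, [1, 2, 3])
    print(f"median latency so far: {statistics.median(samples):.6f}s")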

westurner | Mar 19, 2026
cloudpickle serializes code without signatures, which is an RCE vulnerability.

It is much safer to distribute signed code in signed packages out of band and send only non-executable data in messages.
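
A minimal sketch of the data-only half of that (an HMAC stands in for real package signatures here; the key name and message shape are made up):

    import hashlib
    import hmac
    import json

    SHARED_KEY = b"distributed-out-of-band"  # assumption: key exchanged separately

    def sign(payload):
        # Only non-executable data crosses the wire, with a MAC over it.
        body = json.dumps(payload, sort_keys=True).encode()
        mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return json.dumps({"body": payload, "mac": mac}).encode()

    def verify(wire):
        msg = json.loads(wire)
        body = json.dumps(msg["body"], sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, msg["mac"]):
            raise ValueError("bad MAC; dropping message")
        return msg["body"]  # plain data, never a pickled function

    task = verify(sign({"task": "resize", "args": ["img.png", 128]}))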

It is safer still to store distributed messages in pages with the NX bit set.

A compromise of any client in this system results in DoS and arbitrary RCE; but that's an issue with most distributed task worker systems.

To take a zero-trust approach, you can't rely on the shared TLS key or on the workers never being compromised.

mDNS doesn't scale beyond a broadcast segment without mDNS repeaters spanning multiple network segments, and those don't scale either.

Something centralized like Dask, for example, can log and track state centrally to handle task retries on network, task, and worker failure.
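
For instance, with dask.distributed the retry bookkeeping lives in the central scheduler rather than in every caller (a sketch; the scheduler address is an assumption):

    from dask.distributed import Client

    def work(x):
        return x * 2

    # The central scheduler tracks task state and re-runs failed tasks,
    # so callers don't reimplement retry logic themselves.
    client = Client("tcp://scheduler:8786")  # assumed address
    future = client.submit(work, 21, retries=3)
    print(future.result())  # 42, even if a worker died mid-task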

But Dask doesn't satisfy zero trust design guidelines either.

How are these systems distinct from botnets with a shared TLS key and no certificate revocation?

westurner | Mar 19, 2026
Basically, there's no good way to sidestep the authentication, authorization, and resource quota controls of a resource grid scheduler.

With redundancy and idempotency, distributed computation can work.

In order to run computation distributedly with the required keys, sharded distributed-ledger protocol nodes meter the smart contracts that they execute in a virtual machine whose network access is limited to in-protocol messages. Each "smart contract" costs money to run redundantly on multiple nodes, and so it should or must have an account with a verifiably sufficient balance in order to run.

Smart contracts must be uploaded using a private key with a corresponding public key.

Smart contracts are identified by, and so addressed by, a hash of their bytecode.
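
Content addressing itself is easy to sketch (this is the general technique, not any particular chain's scheme):

    import hashlib

    def contract_address(bytecode: bytes) -> str:
        # Identity is the hash of the code itself: same code, same
        # address, and the address can't be spoofed without changing
        # the code it points to.
        return hashlib.sha256(bytecode).hexdigest()

    print(contract_address(b"\x60\x00\x60\x00")[:16])  # arbitrary example bytes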

Wool, Dask, and Celery @task don't solve for smart contract output storage costs with redundancy. They don't set up database(s) with replication large enough for each computation step for you. Dask and Celery model and track the state of each executed DAG centrally, with logs only as trustable as the centralized nodes, which are a single point of failure.

Why isn't Docker Swarm - which L2-bridges among all nodes without restriction - appropriate for a given application, given the access control lists and cloud (pod) configuration necessary for e.g. AWS or GCP to prevent budget overruns? Why impose quotas on grid users at all?

Serverless functions must be uploaded/deployed before being run, too. To orchestrate a bunch of web services is to execute the DAG and handle errors due to network, node, and service failures and latency.

But then what protocol do the (serverless function) services all implement, so that we don't have to have a hodgepodge of API clients to use all the services in the grid?

With (serverless) functions bound to URL routes, cost each function and estimate the resources required to keep running a function of that cost. To handle even benign resource exhaustion, say, scale up the databases and/or redundant block storage, or change other distributed-computation grid parameters for the pod(s) of resources that service a named, signed function like /api/v1/helloworld_v2?q=select%20* which is creating contention for the organization's costed resources.

On what signals do you scale up or scale down - within a resource budget - to afford fanning out over multiple actually-parallel nodes to compute, sign, and store the data?

extr | Mar 19, 2026
LLM-generated article.
jjtheblunt | Mar 19, 2026
I wonder if an LLM generated article would get the title to use proper English, though: "What if Python were natively distributable?".

It's possible LLMs pick up improper English, of course, since "proper" is some measure of what used to be a norm but may presently be perceived as outdated.

blopker | Mar 19, 2026
Every time I see something like this (turning function calls into network calls), I reflect fondly on the list of fallacies of distributed computing [0]. These are issues that largely have to be handled in an application-specific way and cannot be solved in the general case.

This list alone has saved me many late debugging nights, just by not making or using systems that ignore the list.

[0]: https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...

rao-v | Mar 19, 2026
I generally agree, but I value projects like this because there are smaller-scale environments where many of these fallacies are perfectly fine working hypotheses: my home lab, or a low-volume, low-nines service, etc.
hosh | Mar 19, 2026
Meanwhile, in BEAM land, this is a solved problem, and we're just watching different languages converge toward the same set of primitives and constraints. It is hard to replicate BEAM’s preemptive scheduler, but I would not be surprised if someone ends up thinking they invented something novel by adding queues (mailboxes) or, failing that, something like Go channels.

And even then, once you have a workable set of primitives, it turns out that there are some orchestration patterns that recur over and over again, so people will converge toward OTP once the primitives are there.
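
The reinvention usually looks something like this sketch (a Python thread plus a queue standing in for a BEAM process and its mailbox):

    import queue
    import threading

    def actor(mailbox):
        # Just the mailbox shape: none of BEAM's cheap processes,
        # per-process heaps, or reduction-based preemptive scheduling.
        while True:
            msg = mailbox.get()
            if msg is None:  # poison pill, in place of OTP supervision
                return
            print("got:", msg)

    mailbox = queue.Queue()
    t = threading.Thread(target=actor, args=(mailbox,))
    t.start()
    mailbox.put({"hello": "world"})
    mailbox.put(None)
    t.join()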

lelanthran | Mar 19, 2026
Pah - all these newcomers with interpreted/bytecode languages for remote functions!

It's easy to send text or bytecode to another instance of your runtime. I built a distributed system sending native-code functions to be executed remotely.

Fair enough, it was for academic purposes, but it worked[1].

-------------------------------------

[1] By "worked" I mean it got a passing grade on a thesis that got an IMDB number. I probably still have the actual PDF somewhere, or maybe it was indexed by Google at some point.

kimi | Mar 19, 2026
Answering the question: it would be called "the Erlang VM", and you'd use Elixir to program it.

https://elixir-lang.readthedocs.io/en/latest/mix_otp/10.html

schmichael | Mar 19, 2026
Absolutely wild to see none of the long lineage of similar attempts mentioned here. The earliest I could find with a quick search was Pyro, which started in 1998 and still seems to be going: https://pyro4.readthedocs.io/en/stable/

RPyC came along in the aughts. There's a long history of "transparent clustering and RPC" efforts in Python that could be used or drawn on.

Sad to see that history ignored here.
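
For reference, Pyro's hello world is roughly this (from memory; see the linked docs for the canonical version):

    import Pyro4

    @Pyro4.expose
    class Greeter:
        def hello(self, name):
            return "Hello, " + name

    daemon = Pyro4.Daemon()         # network server for Pyro objects
    uri = daemon.register(Greeter)  # register the class, get its URI
    print("server ready, object uri =", uri)
    # In another process: Pyro4.Proxy(uri).hello("world")
    daemon.requestLoop()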