First heard about them (Ente) yesterday in a discussion about "which 2FA are u using?". Switched straight away to https://ente.com/auth/ on Android and Linux desktop, and I'm very happy with it.
Going to give this a try...
_factor•Mar 25, 2026
You presumably had a working 2FA app already, but off the cuff you decide to switch to some new, unvetted variant X, a basically unknown auth system, after reading a few paragraphs of text in an afternoon?
Does this seem sound?
ahofmann•Mar 25, 2026
While I would have the same reaction, in this case I think it is a sane decision. Ente is cornering the privacy market and I think they're doing a great job. They have a lot to lose (trust) and it would be stupid if they did something shady with the data entered in the 2FA app.
stonogo•Mar 25, 2026
> cornering the privacy market
this seems self-contradictory
ahofmann•Mar 25, 2026
Sorry, English is not my first language and I tried to look clever.
PurpleRamen•Mar 25, 2026
Not knowing them, how could OP trust them instantly? Whether they really deserve that trust or not, you have to know a company for a while, and from many different trustworthy sources. The story is a bit strange.
dotancohen•Mar 25, 2026
There are the issues of competence and track record, not only intent.
yolo_420•Mar 25, 2026
Ente is extremely well known in privacy circles, so this is not just some random company with a random app out of nowhere.
Check PrivacyGuides for example.
testdelacc1•Mar 25, 2026
Ok I checked privacyguides.
Here’s where it was added to PrivacyGuides - https://github.com/privacyguides/privacyguides.org/issues/36.... The person opening the issue is the CEO of ente. So the CEO of ente gets his company mentioned in PrivacyGuides back when it was new and that makes it more legit?
yolo_420•Mar 25, 2026
PrivacyGuides goes through its own vetting process (whether you agree with that process or not is another topic), so I think the discussion to add Ente Photos is the more relevant link:
https://discuss.privacyguides.net/t/ente-photo-management/11...
zaphod12•Mar 25, 2026
if it helps, I've used ente for a year and I really like it.
deltoidmaximus•Mar 25, 2026
I ended up picking them because they were the only open source one that worked on all my devices IIRC.
https://en.wikipedia.org/wiki/Comparison_of_OTP_applications
They just store tokens; barring other failures, at "worst" you get locked out of your account, but nobody else has access either. As good practice, you're also not supposed to be limited to token generation, and typically keep a dozen or so recovery tokens. And if they somehow weren't working at the one task they should do, namely generating tokens, then you wouldn't be able to use them, so the account wouldn't even get added.
So... I might be missing something, can you please explain what worries you and why I should thus worry too?
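(For the curious: the "one task" here is RFC 6238 TOTP, i.e. deriving a short code from a stored shared secret plus the current time. A minimal standard-library Python sketch, using the usual documentation example secret rather than a real one:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Current TOTP code for a base32-encoded shared secret (RFC 6238)."""
        key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
        counter = int(time.time()) // period           # time step
        msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation, RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # matches any standard authenticator app

Losing the app means losing the stored secrets, hence the recovery codes, but there is nothing here to phone home with.)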
testdelacc1•Mar 25, 2026
Not saying they’re a paid promoter. But if I paid someone to speak about my newly launched product, they’d say something exactly like that. “Never heard of these guys before, but I loved their other product you’ve never heard of. I’m super excited to try this one!”
mschulze•Mar 25, 2026
Oh, wow, thanks for posting that. I switched to Ente for my photos recently, had no idea they also have a 2FA app. I was looking for a replacement for Aegis (after a switch to iOS), and this can even import from Aegis backup files. Neat. This means I can finally ditch my old phone I still had to have around just for 2FA :)
glitchc•Mar 25, 2026
This sounds like an ad.
gwerbret•Mar 25, 2026
As do most of the associated comments. I think we're surrounded by bots.
mschulze•Mar 25, 2026
I'm not a bot. Check my comment history and account age.
yomismoaqui•Mar 25, 2026
You sure were when you posted those comments, but now, we cannot be sure...
So you look down and you see a tortoise. It's crawling towards you.
mschulze•Mar 25, 2026
I mean I get it, astroturfing is a real problem and an annoying one for communities. But I also have no idea how to prove to you that I am neither a bot nor shilling here.
vaporwario•Mar 25, 2026
agreed. i have never seen any hacker news user (let alone an assortment of them) saying "i switched my 2fa to this after seeing how great it was!" Not really sure how one 'switches their 2fa' to an LLM...
mschulze•Mar 25, 2026
This thread is about the 2FA app, not the LLM app. I don't care about the LLM app. What's this witch hunt? This app literally solved a (self-inflicted) problem I was having for some years now where I was keeping an old phone around just for MFA. I even thought about creating an iOS app that's compatible with Aegis files (actually I even _started_ working on that, but didn't get far) just to solve my problem. Now I don't have to, thanks to a comment here, and that's why I posted. Geez. I guess I'll stick to negative comments in the future; they seem to be more trustworthy.
NewsaHackO•Mar 25, 2026
Yea, everything about this post is just weird. IDK if they are even bots vs paid actors vs actual people who are clueless etc.
testdelacc1•Mar 25, 2026
For sure. Getting shady vibes from ente. I’ll be avoiding them.
Fervicus•Mar 25, 2026
I am not a bot and I am not associated with this company in any way. But I am a happy user of Ente Auth as well. This AI thing they made however just gives off "we have to do something with AI or we'll be left behind" vibes.
dmboyd•Mar 25, 2026
I was just thinking their end goal seems to be to harvest creds by shipping their own rebadged distribution of local models. That's the only "business" model that makes sense.
Expressly harvesting creds through a 2FA app seems a little more direct.
sylens•Mar 25, 2026
Ente offers E2EE photo hosting, the storage they sell through subscriptions to that is their business model. Their main selling point is that all machine learning to cluster faces is done on your devices. I would assume that they want more users to train their models on to improve their core offering
dotancohen•Mar 25, 2026
I'm very happy syncing between KeepassXC on Debian and Keepass2Android on mobile. It handles TOTP across devices.
What I'm missing is a way to create and use Passkeys across devices. My use case does not support creating a new Passkey on every device, I need to sync them via servers I control. The system that supports that will be the system that I migrate to.
moontear•Mar 25, 2026
And you can self-host the server if you want to! I've been running Ente Auth for quite a while now and am very happy with it.
VladVladikoff•Mar 25, 2026
Maybe I’m missing it but the page is really light on technical information. Is this a quantized / distilled model of a larger LLM? Which one? How many parameters? What quantization? What T/s can I expect? What are the VRAM requirements? Etc etc
hellcow•Mar 25, 2026
I tried it on my iPhone 13 mini. I believe the model you get changes depending on your phone specs. For me it downloaded a ~1.3GB model which can speak in complete sentences but can’t do much beyond that. Can’t blame them though—that model is tiny, and my device wasn’t designed for this.
Either LFM2.5-1.6B-4bit or Qwen3.5-2B-8bit or Qwen3.5-4B-4bit
embedding-shape•Mar 25, 2026
Huh, 1.6B/2B/4B models, I guess they weren't joking when they said "not as powerful as ChatGPT or Claude Code". Also unsure why they said "Claude Code", it's not a CLI agent AFAIK?
dgb23•Mar 25, 2026
This seems to be a general chat app, but otherwise small models can be very effective within the right use cases and orchestration.
embedding-shape•Mar 25, 2026
> otherwise small models can be very effective within the right use cases and orchestration
For a very limited set of use cases, perhaps. As a generalized chat assistant? I'm not sure you'd be able to get anything of value out of them, but happy to be proven otherwise. I have all of those locally already, without fine-tuning; what use case could I try right now where any of those are "very effective"?
dgb23•Mar 25, 2026
Judging from my experimentation with local models:
You can use a small coding model to produce working code with a deterministic workflow (ex: state machine) if you carefully prune the context and filter down what it can do per iteration. Instead of letting it "reason" through an ever growing history, you give it distinct piecemeal steps with tailored context.
I think this can be generalized to:
Anything that can be built from small, well understood pieces and can be validated and fixed step by step. Then the challenge becomes designing these workflows and automating them.
(I'm not there yet, but one thing I have in mind might be a hybrid approach where the planning is produced by a more expensive model. The output it has to produce are data driven state machines or behavior trees (so they can be validated deterministically). Then it offloads the grunt work to a small, local model. When it's done, the work gets checked etc.)
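(A minimal sketch of that workflow shape, assuming a local Ollama server and a small coding model; the endpoint, model name, and validators are illustrative, not anything from the article:

    import json
    import urllib.request

    def ask(prompt: str, model: str = "qwen2.5-coder:1.5b") -> str:
        """One stateless call to a local Ollama server; no chat history kept."""
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["response"]

    # Each state: (tailored prompt template, deterministic validator).
    STEPS = [
        ("Write only a Python function signature for: {task}",
         lambda out: out.strip().startswith("def ")),
        ("Write only the body for this signature:\n{prev}\nTask: {task}",
         lambda out: "return" in out),
    ]

    def run(task: str) -> str:
        prev = ""
        for template, valid in STEPS:
            for _ in range(3):  # bounded retries, not open-ended "reasoning"
                out = ask(template.format(task=task, prev=prev))
                if valid(out):
                    prev = out
                    break
            else:
                raise RuntimeError("step failed validation; halting deterministically")
        return prev

The model never sees a growing history, only the pruned context each state needs, and every transition is gated by a check the machine, not the model, performs.)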
Mashimo•Mar 25, 2026
> Also unsure why they said "Claude Code", it's not an CLI agent AFAIK?
Claude Code is a Desktop app as well.
> Use Claude Code where you work
> Desktop, Terminal, IDE, Web and iOS, Slack
Not that it is important anyway ¯\_(ツ)_/¯
yomismoaqui•Mar 25, 2026
The confusing way AI companies like to name products is something to be studied.
lancekey•Mar 25, 2026
I don’t think so. IIRC the desktop app is called Claude and it has a code option in the UI.
Ok, but "Claude Code"/"Claude Desktop" regardless is software, a tool, not a model/LLM. Doesn't make much sense as they've written it.
Mashimo•Mar 25, 2026
For the end user who just installs the app it's probably all the same. It's not a technical document.
For the user it's just important that the small gremlin that sits in the Ente app is not as smart as the gremlin that sits in the Claude app.
dr_kiszonka•Mar 25, 2026
I so wanted to love Liquid AI's models, but despite their speed I was never able to get anything useful out of them. Even their larger models can't be trusted with simple stuff like inserting a column into a markdown table. The advertised tool calling is also not great. What I found interesting was that the ones I tried were a little light on guardrails.
I would really like to know what people use these small and tiny models for. If any high-karma users are reading it, would you consider posting Ask HN?
sync•Mar 25, 2026
Hmm, the Mac app downloaded gemma-3-4b-it-Q4_K_M.gguf for me (on an Apple M4) - maybe the desktop apps download different models?
Though, I don't see any references to Gemma at all in the open source code...
ahofmann•Mar 25, 2026
I have the same questions. After installing the app, it downloads 2.5 GB of data. I presume this is the model.
juliushuijnk•Mar 25, 2026
I'm working on a rather simple idea: a WordPress plugin that allows you to use a local LLM inside your WordPress CMS. It requires a Firefox add-on to act as a bridge: https://addons.mozilla.org/en-US/firefox/addon/ai-s-that-hel...
The essence works; I was able to have it make a simple summary of CMS content. So next is making it do something useful, and making it clear how other plugins could use it.
There is honestly not much to test just yet, but feel free to check it out and provide feedback on the idea: https://codeberg.org/Helpalot/ais-that-helpalot
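(On the wire, "a local LLM inside your WordPress CMS" boils down to one HTTP call to Ollama's local API. A Python sketch of the summarization step; the plugin would do the equivalent from PHP, and the model name is only an example:

    import json
    import urllib.request

    def summarize(post_text: str, model: str = "llama3.2:1b") -> str:
        """Summarize CMS content via a local Ollama server's chat endpoint."""
        payload = {
            "model": model,
            "messages": [
                {"role": "system", "content": "Summarize this post in 2-3 sentences."},
                {"role": "user", "content": post_text},
            ],
            "stream": False,
        }
        req = urllib.request.Request(
            "http://localhost:11434/api/chat",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["message"]["content"]

Nothing leaves the machine; the Firefox add-on presumably bridges the CMS to that localhost endpoint.)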
Spam for what? This is hackernews, I'm "hacking something" to push more control to users.
Also: "Your AI agent can now create, edit, and manage content on WordPress.com" https://wordpress.com/blog/2026/03/20/ai-agent-manage-conten...
I'm talking about connecting Ollama to your wordpress.
Not via MCP or something that's complicated for a relatively normal user. But thanks for the link.
juliushuijnk•Mar 25, 2026
It seems your link about the Wordpress variation validated my idea :).
If the new WordPress feature allowed connecting to Ollama, there would be no need for my plugin anymore. But I don't see that in the current documentation.
So for now, I see my solution as superior for anyone who doesn't have a paid subscription but has a decent laptop and would like to use an LLM 'for free' (apart from power usage), with 100% privacy, on their website.
bilekas•Mar 25, 2026
> use a local LLM inside your wordpress CMS
For when WordPress doesn't have enough exploits and bugs as it is. Also, why bother with WordPress in the first place if you're already having an LLM spit out content for you?
juliushuijnk•Mar 25, 2026
What's your point? Don't use LLM for CMS content? That my code is buggy? Or that people shouldn't trust the LLM they run on their computer on their own website?
You can check the code for exploits yourself. And other than that it's just your LLM talking to your own website.
> Also why bother with wordpress in the first place
Weird question, but sure, I use WordPress, because I have a website that I want to run with a simple CMS that can also run my custom Wordpress plugins.
chocks•Mar 25, 2026
This looks amazing! As I learn and experiment more with local LLMs, I'm becoming more of a fan of local/offline LLMs. I believe there's a huge gap between local LLM based apps and commercial models like Claude/ChatGPT. Excited to see more apps leveraging local LLMs.
netfl0•Mar 25, 2026
Weird hype going on here in comments.
kylehotchkiss•Mar 25, 2026
Also weird arrogance. Yeah, somebody else could have vibe coded it in a week. But they didn't, so they don't get the 100 HN klout points for it.
Helping non-technical people get off of ChatGPT.com and using increasingly better local models seems worth celebrating and continued iteration.
nathan_compton•Mar 25, 2026
Please god stop letting LLMs write your copy. My brain just slides right over this slop. Perhaps you have a useful product but christ almighty I cannot countenance this boring machine generated text.
I just tried it. It downloaded Qwen3.5 2B on my phone and it's pretty coherent in its sentences, but really annoying with the amount of Ente products mentioned in every occasion.
Other than that it's fast enough to talk to and definitely an easy way to run a model locally on your phone.
https://github.com/Arthur-Ficial/apfel
Apple AI on the command line
https://github.com/ente-io/ente/blob/f254af939ff6950b63edf5f... Here is the system prompt, kinda embarrassing
Then moved to PocketPal now for local LLM.
sunaookami•Mar 25, 2026
>but really annoying with the amount of Ente products mentioned in every occasion
The (hn) title is misleading (unlike the actual title): It's an LLM _App_ not an LLM.
post-it•Mar 25, 2026
> This is not the beginning, nor is this the end. This is just a checkpoint.
Come onnnnnn. I would rather read a one-line "Check out our offline llm" than a whole press release of slop.
This looks very neat. I'm not familiar with the nitty gritty of AI so I really don't understand how it can reply so quickly running on an iPhone 16. But I'm not even going to bother searching for details because I don't want to read slop.
maxloh•Mar 25, 2026
There is also another app called Off Grid, which lets you run any model from Hugging Face (of course you need to choose one your phone can handle): https://github.com/alichherawalla/off-grid-mobile-ai
How is this any different from Ollama plus Open Web UI?
kennywinker•Mar 25, 2026
None of that runs on an ios or android device.
jubilanti•Mar 25, 2026
There are dozens of local inference apps that basically wrap llama.cpp and someone else's GGUFs. The decentralized sync history part seems new? Not much else. But the advertisement copy is so insufferably annoying in how it presents this wrapper as a product.
Have a comparison chart against Ollama, LM Studio, LocalAI, Exo, Jan.AI, GPT4All, PocketPal, etc.
bee_rider•Mar 25, 2026
There are so many wrappers that are obviously wrappers. I wonder if part of the value proposition here is that it is “like a product.” I have no idea if they actually achieve that, though, and doubt it really could be proven on a site.
gardnr•Mar 25, 2026
oMLX is worth a look too if you are on a Mac.
cdrnsf•Mar 25, 2026
I like Ente, but isn't their core product a photos application? Its offshoots like this and 2FA feel incongruous.
alterom•Mar 25, 2026
Their core product is local ML inference in the context of a photo app, i.e. a drop-in cloud dependency removal/replacement for non-technical, privacy-conscious users as well as those who want advanced functionality offline.
This does the same for language models.
FitchApps•Mar 25, 2026
Have you tried WebLLM? Or this wrapper: CodexLocal.com
Basically, you would have a rather simple but capable LLM right in your browser using WebLLM and GPU
emehex•Mar 25, 2026
There are literally 1000s of these types of apps. Why is this on the Front Page?
alterom•Mar 25, 2026
1000s?
I'd love to know a few more local LLM apps that are available on Android and iOS and Mac/PC under the same branding that I can point my non-technical friends to as a ChatGPT alternative that works offline (but still has sync across the devices).
Could you recommend a few?
lone-cloud•Mar 25, 2026
Any half capable engineer can vibe code this in a week. Who cares?
H8crilA•Mar 25, 2026
Someone still has to.
alterom•Mar 25, 2026
>Any half capable engineer can vibe code this in a week. Who cares?
The people who got tired of waiting for "any half capable engineer" to do so.
imadch•Mar 25, 2026
What do you mean by AI on your device? Is it a local LLM? If yes, how many params, 4B or 8B? Device requirements aren't mentioned either.
kennywinker•Mar 25, 2026
Looks like it checks your device specs and downloads the best model that will work? On mine it's using a 3.5B version of Llama.
jasongill•Mar 25, 2026
I love Ente Auth, but Ente (as a company/organization) does a somewhat poor job of calling out their non-photos apps in their branding and on their website. If you go to the "Download" button at the top of the page on this page about their LLM chat app, it downloads... their photo sharing application. If you click Sign Up, it takes you to a signup page with the browser title "Ente Photos" but the page text says "Private backups for your memories" with a picture of a lock - is that the Ente Auth signup, or the Ente Photos app signup?
A little bit of cleanup on their site to break out "Ente, our original photo sharing app" from the rest of their apps would do wonders, because I had to search around on the announcement to find the download for this app, which feels about like trying to find the popular Ente Auth app on their website
cromka•Mar 25, 2026
This is one of the reasons I, as their user, am moving away. Things feel half-baked and I stopped trusting it with my data. I self-host, and the details for this were all over the place. Some of my contributions cleaning up the docs were rejected, so I grew even more wary. I am moving to Immich, even though I'd prefer E2EE for my photos even if I self-host them.
tim-projects•Mar 25, 2026
This app isn't very useful but it did get me thinking.
I have a phone in a drawer; I could install Termux and Ollama on it, and over Tailscale I'd have an always-on LLM for super light tasks.
I do really long for a private chatbot, but I simply don't have access to the hardware required. Sadly I think it's going to be years to get there...
QubridAI•Mar 25, 2026
This is the most important part of local AI maturing: not just better models, but better productization of on-device inference for normal people.
FusionX•Mar 25, 2026
Given how the blog is presented, I assumed this was something novel that solved a unique problem, maybe a local multi-modal assistant for your device.
I installed it and it's none of that. It is a mere wrapper around small local LLMs. And it's not even multi-modal! Anyone could've one-shotted this in Claude in an hour (I'm not exaggerating).
What's the target audience here? Your average person doesn't care about the privacy value proposition (at least not by severely sacrificing chat model's quality). And users who do want that control can already install LMStudio/Llama.cpp (which is dead simple to setup).
The actual release product should've been what's described in "What's next" section.
> Instead of general chat, we shape Ensu to have a more specialized interface, say like a single, never-ending note you keep writing on, while the LLM offers suggestions, critiques, reminders, context, alternatives, viewpoints, quotes. A second brain, if you will.
> A more utilitarian take, say like an Android Launcher, where the LLM is an implementation detail behind an existing interaction that people are already used to.
> Your agent, running on your phone. No setup, no management, no manual backups. An LLM that grows with you, remembers you, your choices, manages your tasks, and has long-term memory and personality.
jubilanti•Mar 25, 2026
> Anyone could've one-shotted this in Claude in an hour (I'm not exaggerating).
This probably could have been one-shotted with Sonnet, not even Opus. Given how over indexed they are on LLM coding, Haiku might even be able to do it.
This is actually an interesting coding model benchmark task now that I think about it.
cyanydeez•Mar 25, 2026
You'd think any one of these great LLMs that claim coding is over would take a non-trivial app and just clone it while fully documenting the process.
If it's so great, why is there so little viscera documenting its greatness? Just lots and lots of words.
post-it•Mar 25, 2026
> Anyone could've one-shotted this in Claude in an hour
I think they did. If you start the download and then open the sidebar and/or background the app, the download progress bar disappears and is replaced by the download button. If you press the download button again, the progress bar reappears at the correct point.
I find that Claude often makes little statefulness mistakes like that. Human developers do too, but the slower and more iterative nature of human development makes it more likely that that would get caught.
reactordev•Mar 25, 2026
It’s a platform play so they can get people onto their defunct photos platform. “Big Tech” in a little suit.
Barbing•Mar 25, 2026
All about immich now right?
buster•Mar 25, 2026
What do you mean when you say defunct photos platform?
fauigerzigerk•Mar 25, 2026
How is Ente Photos defunct? It's getting new features all the time and it works extremely well for me.
i80and•Mar 25, 2026
Defunct? I just switched to it a couple months ago, and it seems actively developed.
(Though I think this announcement is sufficiently unpleasant I'm starting to reconsider)
kylehotchkiss•Mar 25, 2026
Local LLM options for less technical people are worth celebrating IMO. No, not "anybody" could have one-shotted this in CC in an hour.
We have not seen a tidal wave of untechnical people vibe coding up their own software solutions.
collabs•Mar 25, 2026
To add to this, the value that ente or someone like that can bring to the table here is a firm pledge to improve it and maintain it going forward.
That to me is more valuable than code vibe coded by Claude in one afternoon.
jermaustin1•Mar 25, 2026
> We have not seen a tidal wave of untechnical people vibe coding up their own software solutions.
When my little brother, who is a drummer and has never even looked at "code" before, had Claude one-shot a Python app that let him download songs from YouTube, extract the stems, collect tempo/key/etc. information, then feed that into his DAW, all without ever looking at code or knowing what any of it did, I knew that we were about to see a LOT of single-use applications.
I'm not against it, honestly. I have always written little one-off scripts and apps that accomplished something faster than manually, now that those one-shots are possible with an LLM in seconds sometimes, it makes all my personal scripts so much easier... that said, I definitely read the scripts that are output, and attempt to step through them in a debugger before assuming it is all good.
nozzlegear•Mar 25, 2026
edit: nevermind I misread a word and made a social faux pas
I do agree that more local LLM options are always better.
Barbing•Mar 25, 2026
>not
nozzlegear•Mar 25, 2026
Ope
Barbing•Mar 25, 2026
naux pas :)
ttul•Mar 25, 2026
I hate to say it, but this looks like the sort of thing a CEO told their team to build on Monday morning in a panic because they are grasping for ways to participate in the AI craze. And the team did just that: they built it that morning using Claude Code.
There is truly nothing original here and the product doesn't have a chance in hell of earning money. Local LLMs on-device will be dominated by the device vendors, whose control of the hardware stack combined with their ability to subsidize billions of dollars of machine learning research gives them an unfair advantage. Apple knows what the next generation of silicon will deliver, and their ML engineers are already hard at work building models that will be highly optimized for that silicon a year or two ahead of time. Open source models are really great and are backed by well funded labs; however, delivering these models on-device in a way that pleases users will never be easier than it is for the vendors of the devices.
Plus, device vendors have ways of making money from local LLMs that third-party app providers do not. They can make their local LLM free and earn money on the hardware play, without skipping a beat on the billions of dollars of ongoing R&D. I don't see how third party app vendors make money here when they will be competing with the decent, totally free alternative that Apple and Google (and Samsung etc.) will load on in the next year or two.
Barbing•Mar 25, 2026
Wanted to share a message here with the CEO: don't feel too bad, because little is more common than getting caught by this tech.
But where are they! https://ente.com/about
Small team, rooting for them
I write my comment with admiration for founders, because I am one. That being said, chasing trends without paying attention to the steamroller has killed more than one very good company and I have plenty of scar tissue as evidence...
usrusr•Mar 25, 2026
Counter position (not sure it's better than yours): what are the chances that device makers would actually offer something seriously local, and not just something that works in airplane mode but still connects to their cloud later, if not for post-sale monetization then at least for features providing better brand lock-in? I mean, just look at how well the market for TV sets that don't try to shove "services" down buyers' throats is developing...
But sure, making money with a standalone "local first is our headline feature" product will be incredibly hard against those, no doubt about that. Given the limited quality of what local models can achieve, the privacy bonus just won't compel many to pay. But that only means that this "morning with Claude" you are suggesting might be just the right amount of investment for the result you'd realistically expect. And is that so bad? I'd argue the reverse: bundling up the low-hanging fruit, not by some hobbyist who will lose interest two weeks on, but by a company big enough to keep it going while small enough not to be a VC furnace that will inevitably turn on users once the runway runs out (*), is an opportunity to fill a niche few others can. Valuable for users who don't want to roll their own deployment of open source models (can't, or are unwilling to commit to keeping them up to date, assuming that Ente does keep that ball rolling), and also valuable for the company if the investment actually is so low that it pays for itself by raising awareness for their other products that apparently do earn them money.
(*) I was googling around a little, wondering if they actually are as close to bootstrapped as they seem on the surface, and yes, that's supposedly the core idea [0]. Despite that, they also took 100 kUSD in "non-diluting" funding (basically a gift, then?) from Mozilla, with the explicit goal "to promote independent AI and machine learning" [1]. So not a CEO whim, but following up on a promise made earlier. If they actually did avoid spending all that money on a one-off and went smaller, planning to keep it current over a longer time horizon, I'd congratulate them on an excellent choice.
[0] https://ente.com/blog/5-years-of-ente/
[1] https://ente.io/blog/mozilla-builders/
The hn discussion for [1] seems to be completely missing the point, that Mozilla program isn't about funding an image host (yeah, I'd also prefer if Mozilla focused on the Browser and perhaps Thunderbird, but the foundation is what it is): https://news.ycombinator.com/item?id=41681666
moffkalast•Mar 25, 2026
Probably just another Ollama-type service that wants to slide itself in between the user and local models, so they can take all the credit, work on convenience-based platform lock-in, and later introduce paid tiers.
BaudouinVH•Mar 25, 2026
Installed it on a not-so-young laptop. It crashes immediately after launch. I blame the laptop.
If Ente is reading this: please add the requirements to make it run (how much RAM, etc.).
xtracto•Mar 25, 2026
I would love to see a "distributed LLM" system, where people can easily setup a system to perform a "piece" of a "mega model" inference or training. Kind of like SETI@home but for an open LLM (like https://github.com/evilsocket/cake but massive )
Ideally if you "participate" in the network, you would get "credits" to use it proportionally to how much GPU power you have provided to the network. Or if you can't, then buy credits (payment would be distributed as credits to other participants).
That way we could build huge LLMs that are really open and are not owned by any network.
I would LOVE to participate in building that as well.
This was posted the other day, but only briefly made the front page - seems kinda like what you're talking about.
Although the ability to use large models "for free" sounds pretty rad.
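(The bookkeeping side of that credit idea is simple enough to sketch; a toy Python ledger with made-up names and rates, ignoring the genuinely hard part of verifying that contributed work was actually done:

    from collections import defaultdict

    class Ledger:
        """Toy credit ledger: earn by serving compute, spend on inference."""
        def __init__(self) -> None:
            self.credits: defaultdict[str, float] = defaultdict(float)

        def record_contribution(self, peer: str, gpu_seconds: float, rate: float = 1.0) -> None:
            self.credits[peer] += gpu_seconds * rate   # credited for shards served

        def spend(self, peer: str, gpu_seconds: float, rate: float = 1.0) -> bool:
            cost = gpu_seconds * rate
            if self.credits[peer] < cost:
                return False                           # contribute more, or buy credits
            self.credits[peer] -= cost
            return True

    ledger = Ledger()
    ledger.record_contribution("alice", 3600.0)   # an hour of GPU time served
    assert ledger.spend("alice", 600.0)           # ten minutes of inference

As with SETI@home-style projects, the real problem is trust: proving a peer really ran its shard of the forward pass.)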
For local LLM there are Ollama and LM Studio. How is this different?
kennywinker•Mar 25, 2026
This seems like a great question to ask an offline LLM that runs on your mobile device. Do Ollama and LM Studio run on your mobile device?
vvilliamperez•Mar 25, 2026
I just use open claw as a local memory management system. Not sure from TFA what's new here.
daikon899•Mar 25, 2026
The "What's next" section is more interesting than what shipped. A general-purpose chat wrapper around a 1-4B model occupies a crowded space — PocketPal, Jan, LMStudio, GPT4All all do similar things. But the ideas they gesture at (a persistent "second brain" note, an LLM-backed launcher, long-term memory that grows with you) are actually differentiated
pulkitsh1234•Mar 25, 2026
I am surprised to see this on HN front page, there is no new information here, just an ad.
getpokedagain•Mar 25, 2026
As someone who saw this and was interested, but also skeptical of it being low effort: are there other open projects for running small models locally on Android / iOS?
Google's AI Edge Gallery worked really well for me. It's still in early access.
I've found https://github.com/alichherawalla/off-grid-mobile-ai but haven't tried anything in this space yet.
RandomGerm4n•Mar 25, 2026
I like the idea of having a user-friendly app that lets you use LLMs locally. Tools like Ollama and LMStudio tend to put most people off because you have to decide for yourself which models to use and there are so many settings to configure. If the hardware you’re using is compatible, Ensu could be a drop-in replacement for casual ChatGPT users.
However, it’s a bit confusing because, for example, a larger LLM model was downloaded to my smartphone than to my computer. It would probably make the most sense if the app simply categorized devices into five different tiers and then, depending on which performance tier a device falls into, downloaded the appropriate model and simply informed the user of the performance tier.
Over time, it would then be possible to periodically replace the LLM for each tier with better ones, or to redefine the device performance tiers based on hardware advancements.
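(A minimal sketch of that tiering idea, assuming physical RAM is the deciding spec; the cutoffs and model names are invented for illustration, and os.sysconf limits this to Linux/macOS:

    import os

    TIERS = [  # (min RAM in GB, tier name, model to download)
        (16, "high",    "qwen2.5-7b-q4"),
        (8,  "mid",     "qwen2.5-3b-q4"),
        (4,  "low",     "llama-3.2-1b-q4"),
        (0,  "minimal", "smollm-360m-q8"),
    ]

    def pick_model() -> tuple[str, str]:
        """Bucket the device by RAM and return (tier, model)."""
        ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30
        for min_gb, tier, model in TIERS:
            if ram_gb >= min_gb:
                return tier, model
        return TIERS[-1][1], TIERS[-1][2]

    tier, model = pick_model()
    print(f"Device tier: {tier}, downloading {model}")  # surface the tier to the user

Swapping the model per tier later, or redrawing the cutoffs as hardware improves, then becomes a data change rather than a code change.)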
drak0n1c•Mar 25, 2026
Osaurus is what I use for local - it's a native Swift macOS app with sandboxing, agent tooling, and server capabilities. Mac only though.
jimmyjazz14•Mar 25, 2026
LM Studio is pretty darn easy, and if I recall it recommends a model to install when you first start it.
selfawareMammal•Mar 25, 2026
> People called us crazy.
Absolutely no one called them crazy.
razvan_maftei•Mar 25, 2026
Looks like something spun up by Claude Code without thorough testing or design behind it sadly.
tmanderson•Mar 25, 2026
beware of data rackets
socalgal2•Mar 25, 2026
What's special about Ente?
How does it compare to Jan AI for example? or LM Studio? or ????
alterom•Mar 25, 2026
It's available on Android and iOS, and specifically on Play Store and Apple Store (so no "developer mode" hoops to jump through to install).
buster•Mar 25, 2026
I don't understand all the hate about ente, to be honest.
Ente seems to try to solve the big tech lock in with their apps.
Personally, I'm a very happy Ente Photos user, so what's the problem with Ensu?
It's available on desktops and mobile, it's an app trying to give all a little bit more privacy and freedom and yet most comments are just hating on it.
If you can vibe code Ensu in a weekend, please do. Make a better clone if you want to, but don't hate on someone for their work for stupid reasons.
alterom•Mar 25, 2026
I think the hate boils down to "I could've built it in a day with Claude, but didn't, so they suck" sour grapes.
When the comments here say "there's no value because anyone could've compiled llama.cpp", you can see how detached from reality these people are.
Even jumping through the hoops to get an app on Play Store and Apple Store — an app that I can tell my friends to look up and download — is worth a lot.
An app that is also available on Mac and PC, mind you.
I'm an ex-Google/Meta/Microsoft/Roblox software engineer, and I couldn't be bothered to do any of that.
Neither could the rest of HN. But I'm not the one complaining about lack of novelty or value in this proposition.
todotask2•Mar 25, 2026
The model was last trained on Dec 2023; that's considered outdated.
georaa•Mar 25, 2026
Smart move building on local models. The privacy argument is real, but I think the bigger win is latency: no round trip to an API means you can do things like inline suggestions that would feel sluggish over the network.