I have also heard that some firms ban its use. Why? Because it makes it SO easy to set up a database for your app that you end up with a super-critical component of your application that looks exactly like a file. A file that can have any extension, and that can be copied around to other servers, even if there is PII in it. Multiply this by the number of applications in your firm and you can see how this could get a little nuts.
DevOps and DBA teams would prefer that the database be a big, heavy iron thing that is very obviously a database server. And when you connect to it, that's also very obvious etc etc.
I still love SQLite though.
ai_slop_hater•May 7, 2026
That's so dumb
Fwirt•May 7, 2026
The question is, do the same firms ban Excel? Excel spreadsheets often end up as shadow databases in unlikely places.
hermitShell•May 7, 2026
The sane thing would be to ban Excel and promote SQLite. Excel is often used for tabulated text (issue tracking), not calculations. That's a perfect use case for a relational DB.
frollogaston•May 7, 2026
Excel is made for calculations. But if you make it hard to make a DB, people will abuse Excel as a DB.
TJSomething•May 7, 2026
I mean, it might have been at first, but Microsoft figured out back in 1993 that the majority of users use it for lists without formulas, and they've strategized around that. IMHO, the biggest concession to this was adding Power Query to core Excel in 2016.
rswail•May 7, 2026
Excel has sheets for tables, columns and rows, primary keys (UNIQUE), foreign key references, etc., if you squint.
It doesn't require you use all of that properly, but it's there.
0123456789ABCDE•May 7, 2026
And Excel has a GUI for forms.
rantingdemon•May 7, 2026
Only where VBA is available. Not available in the macOS versions, if I'm correct?
harvie•May 7, 2026
or reimplement excel with sqlite as a backend :-D
BTW sqlite can run SQL queries on CSV files with relatively simple one-liner command...
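For the curious, here is one way to do that (the file name and data are invented for illustration; this assumes the sqlite3 command-line shell is installed). In csv mode, `.import` creates the table from the header row, so the whole thing fits in one invocation:

```shell
# Build a throwaway CSV, then query it with a single sqlite3 call.
# .mode csv makes .import treat the first row as column names.
printf 'name,qty\napple,3\napple,4\npear,1\n' > fruit.csv
sqlite3 :memory: \
  -cmd '.mode csv' \
  -cmd '.import fruit.csv fruit' \
  'SELECT name, SUM(qty) FROM fruit GROUP BY name ORDER BY name'
```

With csv output mode this prints `apple,7` and `pear,1`.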
Spooky23•May 7, 2026
They generally cannot. But they do banish Access.
pasc1878•May 7, 2026
Now that is different.
Access gets used as a shared DB, and that is quite easy to corrupt. It is much more cost-effective to have that in a proper central database (I suppose SQLite is better here as well).
cwillu•May 7, 2026
Excel is also a shared DB: it has supported multiple concurrent users accessing and modifying the same spreadsheet for decades.
DeathArrow•May 7, 2026
Do companies ban text files? Text files are used to store data.
altmanaltman•May 7, 2026
Do companies ban brains? Brains are used to store data.
yard2010•May 7, 2026
Do companies ban data centers? It's crazy to send PII to other computers on the line.
silon42•May 7, 2026
IMO, almost any Excel more than a month old should become readonly.
irishcoffee•May 7, 2026
You should consider the knock-on effects of this brilliant idea. Now there will be copies of spreadsheets younger than a month that get replicated 47 billion times, exponentially compounding the problem you're trying to solve.
This sounds like how we pass so many stupid laws. Nobody thinks about 2nd order effects.
croon•May 7, 2026
This might catch flak, but generalizing, I would assume the people banning these things are the same people who would use Excel for something where a database would be better. If so, that is why Excel isn't banned under the same conditions that would get SQLite banned.
slopinthebag•May 7, 2026
> DevOps and DBA teams
Ah so two teams nobody should listen to.
frollogaston•May 7, 2026
At least I'd take it with a grain of salt when the DBA wants you to depend more on the DBA.
slopinthebag•May 7, 2026
Same with devops tbh.
"Hey everyone, we need to choose the option that involves us the most and provides us the most job security"
mschuster91•May 7, 2026
Well... eventually the company learns the lesson the hard way, either because a site goes down or gets 0wned. Then everyone will cry about "how this could happen", and the ops people will tell you in response "we warned you that this would happen, here's the receipts, now GTFO".
For public-sector data preservation, it may be one of the best options.
- The specification is publicly available
- It is widely adopted
- It is likely to remain readable in the future
- It has little dependency on specific operating systems or services
- It carries low patent risk
From the perspective of long-term continuity, avoiding dependence on any particular company or service is extremely important.
Spooky23•May 7, 2026
Archivists also love formats close to native. SQLite lets the relational relationships be present in a way that csv cannot.
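To illustrate the point (schema invented for the example): a SQLite archive carries the relationships themselves, not just the rows, so a future reader can recover them from the file alone, with no out-of-band documentation.

```python
import sqlite3

# A hypothetical two-table archive: the foreign key lives inside the
# file itself, which a pair of CSVs cannot express.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys=ON")
conn.execute("CREATE TABLE author(id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE book(
    id INTEGER PRIMARY KEY,
    title TEXT,
    author_id INTEGER REFERENCES author(id))""")
conn.execute("INSERT INTO author VALUES (1, 'Octavia Butler')")
conn.execute("INSERT INTO book VALUES (1, 'Kindred', 1)")

# Decades later, the schema (and thus the relationship) is still queryable:
print(conn.execute("SELECT sql FROM sqlite_master WHERE name='book'").fetchone()[0])
```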
akihitot•May 7, 2026
That's certainly true. The ability to define table relationships is a major difference from CSV.
ray_v•May 7, 2026
It's so funny, because I was JUST telling a colleague of mine - another librarian - this exact fact about sqlite!
rmunn•May 7, 2026
> As of this writing (2018-05-29) ...
So this news is nearly <del>six</del> EIGHT years old. But I didn't happen to know about it until now, so that's not a complaint at all; rather, this is a thank-you for posting it.
(Thanks for the correction. Brief brain malfunction in the math department there).
tehlike•May 7, 2026
Sir, it's 2026. It's 8 years old.
rmunn•May 7, 2026
Corrected; thanks.
harrouet•May 7, 2026
Not if the GP was written 2 years ago :)
frollogaston•May 7, 2026
Was going to say, was having deja vu reading this
tombert•May 7, 2026
On a recent project I have needed to use exFAT. exFAT is terrible for a number of reasons, but in my case the thing I had to deal with was the lack of journaling, which had the possibility to corrupt files if there were a power interruption or something.
I initially was writing a series of files and doing some quasi-append-only things with new files and compacting the old one to sort of reinvent journaling. What I did more or less worked but it was very ad hoc and bad and was probably hiding a lot of bugs I would eventually have to fix later.
And then I remembered SQLite. I realized that ACID was probably safe enough for my needs, and then all the hard parts I was reinventing were probably faster and less likely to break if I used something thoroughly audited and tested, so I reworked everything I was doing to SQLite and it worked fine.
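A sketch of the kind of thing that replaces the ad-hoc journaling (file and table names are hypothetical): a single transaction either fully applies or fully disappears, so a power cut mid-write cannot leave a half-updated state on disk.

```python
import sqlite3

conn = sqlite3.connect("state.db")
conn.execute("CREATE TABLE IF NOT EXISTS kv(key TEXT PRIMARY KEY, val TEXT)")

def save_all(items):
    # One atomic unit: if power dies mid-way, SQLite's journal rolls the
    # file back to the previous consistent state on the next open.
    with conn:  # BEGIN ... COMMIT (or ROLLBACK on exception)
        for k, v in items.items():
            conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))

save_all({"mode": "fast", "retries": "3"})
```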
I wish exFAT would die in a fire and a journaling filesystem would replace it as the "one filesystem you can use everywhere", but until it does I'm grateful SQLite exists.
mmooss•May 7, 2026
> I wish exFAT would die in a fire and a journaling filesystem would replace it as the "one filesystem you can use everywhere"
Where exactly is everywhere? Win32? All of Linux? BSDs? macOS? iOS? ...
ghrl•May 7, 2026
Something MacOS and Windows support natively would be a good start, it could grow from there.
Ringz•May 7, 2026
Looking at *all* my external drives now... that would be great.
tombert•May 7, 2026
Everywhere exFAT is supported now. Windows, Mac, Linux, FreeBSD would be fine.
pbhjpbhj•May 7, 2026
Presumably Microsoft fear making it easy to swap OSes and access the same data.
"I can use Linux because if I get stuck I can just switch to Windows and still access my data" is a comfort that probably keeps people from even trying Linux (or other OSes)?
Why else would MS not support BTRFS/ZFS/Ext or whatever?
{I'm not saying that I think this works.}
iknowstuff•May 7, 2026
> Why else would MS not support BTRFS/ZFS/Ext or whatever?
You seriously can’t think of another reason? File systems are complex. Maintenance is a huge burden. Getting them wrong is a liability. Reason enough to only support the bare minimum. And then, 99% of their users don’t care about any of those. NTFS is good enough
topham•May 7, 2026
The problem with it is you didn't solve your biggest actual problem, you just haven't had a problem bite you in the ass yet so you think your problem is solved.
tombert•May 7, 2026
I am not sure the problem is actually fully solvable. I think SQLite helps at least a little.
faangguyindia•May 7, 2026
I went from thinking "SQLite is a toy product, not reliable for real data" to "let's use SQLite for almost everything".
SQLite is very good if you can fit into the single writer, multiple readers pattern; you'll never lose data if you use the correct settings, which takes a minute of Google search to figure out.
Today, most of my apps are simply go binary + SQLite + systemd service file.
I've yet to lose data. Performance is great and plenty for most apps
michaelchisari•May 7, 2026
The single writer is less of an issue in practice than it's made out to be. Modern nvme drives are incredible and it's trivial to get 5k writes per second in an optimized WAL setup. Way more than most apps could ever dream.
And even then, I've used a batch writer pattern to get 180k writes per second on a commodity vps.
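A minimal sketch of that batch-writer pattern (table, file, and queue names invented): producers enqueue rows, and a single writer thread flushes whatever has accumulated in one transaction, so one commit covers a whole batch instead of one per insert.

```python
import queue
import sqlite3
import threading

q = queue.Queue()
done = threading.Event()

def writer():
    # The one-and-only writer: owns the connection, drains the queue.
    conn = sqlite3.connect("events.db")
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS events(ts REAL, payload TEXT)")
    while not (done.is_set() and q.empty()):
        try:
            batch = [q.get(timeout=0.1)]
        except queue.Empty:
            continue
        while not q.empty():          # grab whatever else has arrived
            batch.append(q.get_nowait())
        with conn:                    # one COMMIT (one fsync) per batch
            conn.executemany("INSERT INTO events VALUES (?, ?)", batch)
    conn.close()

t = threading.Thread(target=writer)
t.start()
for i in range(1000):
    q.put((float(i), f"event-{i}"))
done.set()
t.join()
```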
Ringz•May 7, 2026
I usually try to explain it like this: “Single writer” is rarely a real problem, because a writer is not slow. It writes exclusively, but very quickly.
"Batch writer pattern" is a good idea to get rid of expensive commits.
ex: main.db + fts.db. Reading and writing to main.db is always available; updating the FTS index can be done without blocking the main database, since the indexer only needs to read, and the reads can be chunked and delayed. fts.db keeps the index plus a cursor table (an id or last-change timestamp).
You could also use a shard to handle tables for metrics, or simply move old data out of main.db.
Some examples:
import sqlite3

conn = sqlite3.connect("data.db")
conn.execute("PRAGMA journal_mode=WAL")    # concurrent reads (see above)
conn.execute("PRAGMA synchronous=NORMAL")  # fsync at checkpoint, not every commit
conn.execute("PRAGMA cache_size=-62500")   # ~61 MB page cache (negative = KB)
conn.execute("PRAGMA temp_store=MEMORY")   # temp tables and indexes in RAM
conn.execute("PRAGMA busy_timeout=5000")   # wait 5s on a lock instead of failing
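The main.db + fts.db split described above can be sketched like this (table names invented; assumes an FTS5-enabled SQLite build). The indexer reads main.db in chunks and records how far it got in fts.db, so main.db's writer is never blocked by indexing:

```python
import sqlite3

main = sqlite3.connect("main.db")
main.execute("CREATE TABLE IF NOT EXISTS docs(id INTEGER PRIMARY KEY, body TEXT)")
main.executemany("INSERT INTO docs(body) VALUES (?)",
                 [("hello world",), ("goodbye world",)])
main.commit()

fts = sqlite3.connect("fts.db")
fts.execute("CREATE VIRTUAL TABLE IF NOT EXISTS doc_idx USING fts5(body)")
fts.execute("CREATE TABLE IF NOT EXISTS cursor(last_id INTEGER)")
fts.execute("INSERT INTO cursor SELECT 0 WHERE NOT EXISTS (SELECT 1 FROM cursor)")

def catch_up(chunk=100):
    # Only READS main.db; can run as often (or as rarely) as you like.
    last = fts.execute("SELECT last_id FROM cursor").fetchone()[0]
    rows = main.execute(
        "SELECT id, body FROM docs WHERE id > ? ORDER BY id LIMIT ?",
        (last, chunk)).fetchall()
    if rows:
        with fts:  # index update and cursor advance commit together
            fts.executemany("INSERT INTO doc_idx(rowid, body) VALUES (?, ?)", rows)
            fts.execute("UPDATE cursor SET last_id = ?", (rows[-1][0],))

catch_up()
```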
edit: ORMs will obliterate your performance; use raw queries instead. Just make sure to run static analysis on your code base to catch SQLi bugs.
My replies are being rate-limited, so let me add this:
The heavy-duty server that other databases have is doing the load-bearing work that folks tend to complain SQLite can't do.
The real DBMSs are doing mostly the same work that SQLite does, you just don't have to think about it once they're set up. Behind that chunky server process, the database is still dealing with writing your data to a filesystem, handling transaction locks, etc.
By default SQLite gives you a stable database file: when you see the transaction complete, it means the changes have been committed to storage and cannot be lost if the machine were to crash exactly after that.
You can decide to waive some, or all, of those guarantees in exchange for performance, and this doesn't even have to be an all-or-nothing situation.
hparadiz•May 7, 2026
Oh fun, something I have some metrics on. I just made this benchmark for every PHP ORM a few weeks ago, for fun.
There's a huge performance difference between memory and file storage within sqlite itself. Not even getting into tuning specifics.
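A quick way to see that gap for yourself (absolute numbers vary by machine; this just times the same autocommit inserts against ":memory:" and an on-disk file):

```python
import sqlite3
import tempfile
import time

def time_inserts(path, n=1000):
    # isolation_level=None means true autocommit: every INSERT is its own
    # transaction, so the on-disk file pays a durability cost per row
    # that the in-memory database never does.
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute("CREATE TABLE t(x INTEGER)")
    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO t VALUES (?)", (i,))
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

t_mem = time_inserts(":memory:")
with tempfile.TemporaryDirectory() as d:
    t_file = time_inserts(d + "/t.db")
print(f"memory: {t_mem:.3f}s  file: {t_file:.3f}s")
```

Wrapping the loop in one transaction shrinks the gap dramatically, which is much of what tuning comes down to.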
afshinmeh•May 7, 2026
I love SQLite, and thanks for sharing it, but there should be a "(2018)" at the end of the title:
> As of this writing (2018-05-29) the only other recommended storage formats for datasets are XML, JSON, and CSV.
maxloh•May 7, 2026
FYI, they added a lot more formats to the list after that.
Preferred
1. Platform-independent, character-based formats are preferred over native or binary formats as long as data is complete, and retains full detail and precision. Preferred formats include well-developed, widely adopted, de facto marketplace standards, e.g.
a. Formats using well known schemas with public validation tool available
b. Line-oriented, e.g. TSV, CSV, fixed-width
c. Platform-independent open formats, e.g. .db, .db3, .sqlite, .sqlite3
2. Any proprietary format that is a de facto standard for a profession or supported by multiple tools (e.g. Excel .xls or .xlsx, Shapefile)
3. Character Encoding, in descending order of preference:
a. UTF-8, UTF-16 (with BOM),
b. US-ASCII or ISO 8859-1
c. Other named encoding
---
Acceptable
For data (in order of preference):
1. Non-proprietary, publicly documented formats endorsed as standards by a professional community or government agency, e.g. CDF, HDF
2. Text-based data formats with available schema
For aggregation or transfer:
1. ZIP, RAR, tar, 7z with no encryption, password or other protection mechanisms.
.7z being there just discredits the entire process. The container allows an arbitrary choice of underlying compression algorithms[0], which can contain bugs and exploits[1]. Personally I use only zstd with .7z, which is "non-standard" relative to the official (Russian) release.
I love using zstd, it's so fast to decompress. I especially like that the JavaScript decoder is 8kb and still really fast. Though the 25kb wasm decoders are about twice as fast.
What are the advantages or reasons to use zstd in a 7z container versus just .zst?
tnelsond4•May 7, 2026
I'm always inspired by SQLite. Overall I like it, but if you're not doing writes it's really overkill.
So I made a format that will never surpass SQLite, except that it's much lighter and faster and works on zstd-compressed files. It has really small indexes and can contain binaries or text just like SQLite.
The wasm part that decompresses, reads, and searches the databases is only 38 kB uncompressed (maybe 16 kB gzipped). Compared to SQLite's 1.2 MB of wasm and glue code, that's 3% of the size, and searching and loading are much faster. My format isn't really column-based and isn't suitable for managing spreadsheets, but it's great for dictionaries and file archives of images and audio.
I ported the jbig2 decoder as a 17kb wasm module, so I can load monochrome scans that are 8kb per page and still legible.
Believe me, I tried sticking to SQLite or aard2 or stardict, they just were fundamentally inadequate with no good pwa cross platform tooling.
bbkane•May 7, 2026
Does this remain true now that SQLite has a WASM build?
tnelsond4•May 7, 2026
Yes, because originally when I started PeakSlab it used the SQLite wasm build.
lpln3452•May 7, 2026
Creating something new for a different use case isn't pointless. It's like comparing inline skates to ice skates.
keybored•May 7, 2026
Doesn't even apply unless someone says that (1) there are too many "standards", and (2) so we are making this standard (neither applies here). Someone just made something.
We should really consider eventually retiring memes because they just end up as thought-terminating cliches.
This is of course referring to xkcd #927. How do I know that?
giza182•May 7, 2026
Perhaps a dumb question, but how do you get data into it if you're not doing writes?
tnelsond4•May 7, 2026
Generate it one time from a source tsv file or folder of media.
andrelaszlo•May 7, 2026
I think it's just immutable once you've generated it. No need to update indexes or check consistency on writes, no need for transactions, etc.
pfortuny•May 7, 2026
Think historical records of, say, share values for past years. You might have a single db for 1900-2000, for instance. Things like that.
Not everything needs to be real-time updated.
pjc50•May 7, 2026
I think actually this competes with the old BerkeleyDB: https://en.wikipedia.org/wiki/Berkeley_DB - which I now see is no longer BSD-licensed, and in any case has been rendered almost extinct by SQLite. It was used for basic on-disk key-value store work.
tnelsond4•May 7, 2026
Even BerkeleyDB tries to be mutable. What I'm doing doesn't need the mutability so it's much more similar to dictionary formats (though probably simpler) than it is to a database. Though a lot of people do use full databases for immutable dictionary key-value stuff. I just couldn't get any database to work well enough for a pwa dictionary.
meindnoch•May 7, 2026
It is crashing Safari.
testermelon•May 7, 2026
I'm surprised they included proprietary formats that are de facto standards in a profession or supported by multiple tools (.xls, .xlsx) in the preferred section [1]. I wonder if "well-known enough" is as good as "open" from a preservation standpoint.
You can unzip the xlsx and read the xml inside. It’s not the worst format by far.
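Indeed, an .xlsx file is just a ZIP archive of XML parts, readable with nothing but the standard library. For illustration this builds a toy stand-in with a single worksheet part (a real workbook has more parts, like [Content_Types].xml and xl/workbook.xml):

```python
import zipfile

# Write a minimal fake workbook: one worksheet part inside a ZIP.
with zipfile.ZipFile("book.xlsx", "w") as z:
    z.writestr("xl/worksheets/sheet1.xml",
               "<worksheet><sheetData><row><c><v>42</v></c></row>"
               "</sheetData></worksheet>")

# Reading it back requires no Office software at all.
with zipfile.ZipFile("book.xlsx") as z:
    xml = z.read("xl/worksheets/sheet1.xml").decode("utf-8")
print(xml)
```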
mort96•May 7, 2026
Especially when Office 365 shows that not even Microsoft is capable of making software which can display Office files anymore... if you have a Word file which was created or has ever been modified by the Word application, working with it through Office 365 in a browser is such a pain. I've literally had images which are impossible to delete or move in the web version, and they will absolutely render in the wrong place.
guelo•May 7, 2026
I get annoyed at all the other DBs that require their own heavy duty server process when for 90% of my projects there is only one client, my app server. Is there a DB that combines sqlite's embedded simplicity with higher concurrent write throughput?
https://the-php-bench.technex.us/
[0]: https://7-zip.org/7z.html
[1]: CVE-2025-0411
https://github.com/tnelsond/peakslab
SQLite is very well engineered, PeakSlab is very simple.
[1] https://www.loc.gov/preservation/resources/rfs/data.html