Basically I need read-only access via browsers to a large folder structure on my NAS, but I want:
- the ability to somewhat rapidly search all files in the hierarchy based on their filename. Metadata not required for my use case, just the file names. It's totally fine if there's an initial indexing phase, but some sort of fsnotify-based "keep it up to date" function would be nice.
- a very simple preview viewer, sort of like most cloud sharing sites (Dropbox has one for example), where various media files can be viewed/streamed in-browser. No need for any transcoding or anything like that - if the browser can't play the codecs, it's fine for it not to work. A download link is of course a good idea.
- Ideally configurable - show previewer for files matching these mimetypes/extensions/etc., default to download otherwise.
- decent design - nginx's indexes suck; they cut off filenames that are even moderately long. Doesn't have to be top-tier design team level stuff, but something with basic bootstrap/material would be much better than the ugly indexes.
- (ideally) direct access - i.e. https://mynas.local/server-app/media/tv/ should open the app to that folder. It's fine if that requires web server/proxy server support to do URL rewriting or whatever.
- alternatively, https://mynas.local/server-app?path=/media/tv would be fine, and wouldn't require any interaction with the web server.
- use the web server's functionality for actually sending the files themselves - i.e. an app that opens the file, reads it into RAM and then sends it via the socket is far less efficient than the web server, which can use sendfile. (If you're writing an app, this can usually be done with a response header such as nginx's X-Accel-Redirect or Apache's X-Sendfile.) This also ensures support for ranges and other stuff that web servers can provide to clients.
- Read only is fine and ideal. If uploading is possible, should have some form of authentication required. (No auth engine needed for the read-only side, if anything I can configure my reverse proxy to add that.)
- something that can run in Docker. This is not a very tall order these days though. :)
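The sendfile-offload point above can be sketched concretely: the app answers with only headers, and nginx serves the bytes itself via an internal location. A minimal stdlib-only Python WSGI sketch, assuming a hypothetical nginx internal location `/protected/` aliased to the media root (the location name, media root, and traversal check are all illustrative, not a hardened implementation):

```python
# Minimal WSGI app: instead of reading the file itself, it answers with
# X-Accel-Redirect so nginx streams the bytes (sendfile, Range support, etc.).
# Assumes nginx config along the lines of:
#   location /protected/ { internal; alias /srv/media/; }

from urllib.parse import parse_qs

def app(environ, start_response):
    # e.g. GET /server-app?path=/media/tv/show.mkv
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    rel = qs.get("path", ["/"])[0].lstrip("/")
    if ".." in rel.split("/"):  # naive traversal guard, illustration only
        start_response("400 Bad Request", [("Content-Type", "text/plain")])
        return [b"bad path"]
    start_response("200 OK", [
        # nginx discards this response's body and instead serves the file
        # at /protected/<rel>, using sendfile under the hood.
        ("X-Accel-Redirect", "/protected/" + rel),
        ("Content-Type", "application/octet-stream"),
    ])
    return [b""]  # body is ignored once nginx sees X-Accel-Redirect
```

Run it under any WSGI server behind nginx, with nginx proxying `/server-app` to the app and defining the internal `/protected/` location; the app never opens the file at all.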
What I don't need (if it's there it's fine but I don't have a need for it):
- creating sharing links
- transcoding during streaming
- user accounts
- extreme levels of customizability (styling with custom CSS = fine)
- upload support
- "gallery" views (simple list with no thumbnails is fine, even in folders with images/videos/music)
- metadata/content search - simple string search based on filenames is fine, imagine "find . > list.txt" followed by "cat list.txt | grep 'search_term'"
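The filename-only search I'm describing really is just "walk the tree once, then grep the list." A minimal sketch in Python, with os.walk standing in for the initial indexing phase (a real app would add an fsnotify/inotify watcher to keep the index fresh; function names here are just illustrative):

```python
import os

def build_index(root):
    """One-time indexing pass: collect every file path under root.
    The in-memory equivalent of `find . > list.txt`."""
    index = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            index.append(os.path.join(dirpath, name))
    return index

def search(index, term):
    """Case-insensitive substring match on filenames only.
    The equivalent of `grep -i 'search_term' list.txt`."""
    term = term.lower()
    return [p for p in index if term in os.path.basename(p).lower()]
```

For a NAS-sized tree this list easily fits in RAM, and a linear substring scan over a few hundred thousand entries is plenty fast for "somewhat rapid" search.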
Right now I'm just using good old nginx indexes, but they leave much to be desired, as I've already noted. I've started trying to build this idea multiple times and have a handful of very, very incomplete iterations, but I've never had the time to get it over the finish line. Plus I kinda suck at frontend web dev.
I wonder how they do this. Are the drives even SAS/NVMe/some standard interface, or are they fully proprietary? What "logic" is being done on the controller/backplane vs. in the drive itself?
If they have moved significant amounts of logic such as bad block management to the backplane, it's an interesting further example of "full circle" in the tech industry. (e.g. we started out using terminals, then went to locally running software, and now we're slowly moving back towards hosted software via web apps/VDI.) I see no practical reason to do this other than (theoretically) reducing manufacturing costs and (definitely) pushing vendor lock-in. Not like we haven't seen that sort of thing before, e.g. NetApp messing with firmware on drives.
However, if they just mean that the 29TB disks are SAS drives, that the enclosure firmware implements some sort of proprietary filesystem, and that the disks are only officially supported in their enclosure (but a disk could still operate on its own as just a big 29TB drive), then we could in theory pick these drives up used and stick them in any NAS running ZFS or similar. (I'm reminded of how they originally pitched the small 16/32GB Optanes as "accelerators" and for a short time people weren't sure if you could just use them as tiny NVMe SSDs - turned out you could. I have a Linux box that uses an Optane 16GB as a boot/log/cache drive and it works beautifully. Similarly, those 800GB "Oracle accelerators" are just SSDs; one of them is the VM store in my VM box.)