kevincox

joined 3 years ago
[–] kevincox@lemmy.ml 3 points 5 days ago

It honestly sounds more like someone convincing you that crypto is great than someone convincing you that Greenpeace is great.

[–] kevincox@lemmy.ml 11 points 1 week ago

I switched to Immich recently and am very happy.

  1. Immich's face detection is much better and very rarely fails, especially for non-white faces. Even for white faces, PhotoPrism regularly needed me to review the unmatched faces. I also had to turn the "what is a face" threshold way up, because otherwise it would miss a ton of clear faces. (Then it only missed some, but also had tons of false positives.) Immich, on the other hand, just works.
  2. Immich's UI is much nicer overall, with lots of small affordances. For example, the "view in timeline" menu item is worth switching for on its own. Also, good riddance to PhotoPrism's persistent and buggy selection. Someone must have worked really hard on implementing it, but it was just a bad idea.
  3. Immich has an app with uploading, and it lets you view local and uploaded photos in one interface, which is a huge UX win. I couldn't find a good Android app for uploading to PhotoPrism. You could set up import delays and such, but you would still regularly get partially uploaded files imported and have to clean them up manually.
  4. Immich's search by content is much better. For example, searching for "cat with red and yellow ball" was useless on PhotoPrism, but turned up tons of the results I was looking for on Immich.

The bad:

  1. There is currently terrible jank in the Immich app that makes videos unusable and everything else painful. Apparently this is due to an album sync process running on the main thread. They are working on it; I can't fathom how a few hundred albums causes this much lag, but 🤷. There is also even worse lag on the location view page, but at least that is just one page.
  2. The Immich app has far fewer features than the website. But the website works very well on mobile, so even just using the website (and the app for uploading) is better than PhotoPrism here. The fundamentals are good; it just needs more work.
  3. I liked PhotoPrism's advanced filters. They were very limited but at least they were there.
  4. Not being able to sort search results by date is a huge usability issue. I often know roughly when the photo I want to find was taken, so being able to order by date would be hugely helpful.
  5. You have to eagerly transcode all videos; there is no way to clean up old transcodes and re-transcode on the fly. To be fair, the PhotoPrism story wasn't great either, since you had to wait for the full video to be transcoded before playback started, leading to a huge delay for videos more than a few seconds long. But at least I could save a few hundred gigs of disk space.

Honestly, a lot of PhotoPrism feels like one developer has a weird workflow and optimized the tool for it. Most of those features run counter to what I actually want to do (like automatic title and description generation, the review flow, or the automatic quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case much better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)

[–] kevincox@lemmy.ml 18 points 1 week ago

Most Intel GPUs are great at transcoding: reliable, widely supported, and a good amount of transcoding power for very little electrical power.

The main thing I would check is which formats are supported. If the other GPU supports newer formats like AV1, it may be worth it, either if you want to store your videos in these more efficient formats or if you have clients that can consume them and will appreciate the reduced bandwidth.
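If you want to check what a GPU can do on Linux, one quick way is to list the VA-API profiles it exposes. A minimal sketch of that check, assuming the vainfo tool from libva-utils is installed:

```python
# Sketch: list VA-API profiles to see which codecs the GPU supports.
# Requires the `vainfo` tool (from libva-utils) to be installed.
import subprocess

out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout
for line in out.splitlines():
    # Entries look roughly like "VAProfileAV1Profile0 : VAEntrypointVLD".
    if any(codec in line for codec in ("AV1", "HEVC", "H264")):
        print(line.strip())
```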

But overall, if you aren't having any problems, there is no need to bother. The onboard graphics are simple and efficient.

[–] kevincox@lemmy.ml 2 points 1 week ago

Yes. As this is a workstation, memory use is highly variable. More than 95% of the time I would probably barely notice having only 32GiB, but at other times the extra capacity is a huge performance win. Sometimes I am compiling lots of things, and 32 compiler processes plus ample disk cache matter a lot; other times I am processing lots of data or running a few VMs.

It is a bit of a luxury. If I were on a tighter budget I would have gone for 64GiB. However, the price difference wasn't that much, and at least a handful of times I have been quite happy to have the capacity available. Worst case, everything just sits in disk cache after a warm-up, which is a small performance win on every small task.

[–] kevincox@lemmy.ml 2 points 1 week ago (2 children)

I have enough disk space.

Plus, my /tmp is a ramdisk, and sometimes I compile large things in there (Firefox), so it is nice to let it be flushed out to disk when there are more important uses for that RAM than holding files that most likely won't be read again.

[–] kevincox@lemmy.ml 2 points 1 week ago

There are three parts to the whole push system.

  1. A push protocol. You get a URL and post a message to it. That message is E2EE and gets delivered to the application.
  2. A way to acquire that URL.
  3. A way to respond to those notifications.

My point is that part 1 is the core: it is already available across devices (including over Google's push notification system), and writing custom push servers is very easy. It would make sense to keep that interface but provide alternatives for 2 and 3. That way browsers can keep using the JS API for 2 and 3, while other apps use a different API. The push server and the app server can remain identical across browsers, apps, and anything else. This provides compatibility with the currently reigning system, allows tiny shims for people who don't want to self-host, and still maintains the option to fully self-host as desired.
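As a concrete illustration of part 1, here is a minimal sketch of an app server pushing one message, using the third-party pywebpush library (my choice of library; the endpoint, keys, and addresses are hypothetical placeholders that a client would normally hand you after subscribing):

```python
# Sketch: send an E2EE WebPush message to a subscription endpoint.
# pip install pywebpush; all values below are made-up placeholders.
from pywebpush import webpush

subscription_info = {
    "endpoint": "https://push.example.com/send/abc123",  # URL from step 2
    "keys": {
        "p256dh": "<client public key>",
        "auth": "<client auth secret>",
    },
}

# pywebpush encrypts the payload to the client's keys before POSTing it,
# so the push server only ever sees ciphertext.
webpush(
    subscription_info,
    data="You have a new message",
    vapid_private_key="private_key.pem",
    vapid_claims={"sub": "mailto:admin@example.com"},
)
```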

[–] kevincox@lemmy.ml 3 points 1 week ago (4 children)
```
% free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi        15Gi        90Gi       523Mi        22Gi       110Gi
Swap:           63Gi          0B        63Gi
```

I'll use it eventually. Just gotta let the disk cache warm up.

[–] kevincox@lemmy.ml 1 points 1 week ago

> I don’t want the end executable to have to bundle these files and re-parse them each time it gets run.

No matter how you persist data, you will need to re-parse it. The question is really just whether the new format is more efficient to read than the old one. Some formats, such as FlatBuffers and Cap'n Proto, are designed to have very efficient loading processes.

(Well, technically you could persist the process image to disk, but that tends to be much larger than serialized data would be, and it has issues such as defeating ASLR. It is very rarely done.)

Lots of people are suggesting Pickle, but it isn't particularly fast. That being said, with Python you can't expect much to start with.
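To make the "you always re-parse" point concrete, here is a small illustrative timing sketch (the data set is made up and the numbers will vary by machine):

```python
# Sketch: every load of pickled data is a full re-parse of the blob.
import pickle
import time

data = {"rows": [(i, f"name-{i}") for i in range(1_000_000)]}
blob = pickle.dumps(data)

start = time.perf_counter()
pickle.loads(blob)  # the whole structure is rebuilt in memory on each load
elapsed = time.perf_counter() - start
print(f"pickle.loads took {elapsed:.3f}s for {len(blob)} bytes")
```

Zero-copy formats like FlatBuffers avoid this rebuild step by reading fields directly out of the buffer, which is where their loading advantage comes from.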

[–] kevincox@lemmy.ml 1 points 1 week ago

Must be because Factorio released 2.0 and the Space Age DLC recently.

[–] kevincox@lemmy.ml 2 points 1 week ago (2 children)

IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports E2EE (and requires it in browsers, so support is universal).

UnifiedPush would be better as a framework for WebPush providers plus a client API, while using the same protocol and backends as WebPush (although, since getting a WebPush endpoint is defined as a JS API in browsers, that part would need to be adapted).

[–] kevincox@lemmy.ml 9 points 1 week ago (1 children)

Why WASM? It seems to me that the attack surface of WASM is negligible compared to JavaScript (and IIUC disabling JavaScript also disables WASM).

Blocking third-party frames is definitely a good way to reduce your attack surface, though. Ad embeds are often used to distribute exploits.

[–] kevincox@lemmy.ml 1 points 1 month ago

I paid for GPM for quite a while. Then I started working at Google, beta tested YouTube Music from very early on, and gave lots of feedback about how it sucked. When they shut down GPM, I cancelled my YouTube Premium membership and installed an ad blocker. It wasn't just YTM: so many things about YouTube were getting worse and worse, and I couldn't find it in myself to keep paying for a service that kept removing features.


Is there any service that will speak LDAP but just respond with the local UNIX users?

Right now I have good management for local UNIX users, but every service wants to do its own auth. That means the pain of remembering different passwords, configuring passwords when setting up a new service, and so on.

I noticed that a lot of services support LDAP auth, but for simplicity I don't want to make my UNIX user accounts depend on LDAP. So I was wondering if there is some sort of shim that talks the LDAP protocol but just does authentication against the regular user database (PAM).

The closest I have seen is the services.openldap.declarativeContents NixOS option, which I could probably use by transforming my regular UNIX settings into an LDAP config at build time, but I was wondering if there is anything simpler.
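For reference, the core check such a shim would run for each LDAP simple bind is just a PAM authentication. A minimal sketch using the third-party python-pam binding (my example, not an existing shim; the service name and user are hypothetical):

```python
# Sketch: validate a username/password against the local PAM stack.
# pip install python-pam. Note that checking against /etc/shadow via
# pam_unix typically requires running with sufficient privileges.
import pam

def check_credentials(username: str, password: str) -> bool:
    p = pam.pam()
    # "login" is a common PAM service name; a real shim might ship its own
    # /etc/pam.d entry instead.
    return p.authenticate(username, password, service="login")

if __name__ == "__main__":
    print(check_credentials("alice", "hunter2"))  # hypothetical user
```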

(Related note: I really wish services would let you specify the user via an HTTP header; then I could manage auth entirely at the reverse proxy without worrying about bugs in the service.)


I'm reconsidering my terminal emulator and was curious what everyone was using.

SaaS RSS hosting (www.rss-hosting.com)
1 point, submitted 2 years ago* (last edited 2 years ago) by kevincox@lemmy.ml to c/rss@lemmy.ml

I know email isn't everyone's favourite RSS reader, but it works really well for me. I wasn't happy with any of the existing services, so I started my own.

https://feedmail.org is a low-cost RSS-to-Email service with nice clean templates. I'm happy to answer any questions.
