greyfox

joined 1 year ago
[–] greyfox@lemmy.world 30 points 1 week ago (2 children)

If you are accessing your files through Dolphin on your Linux device, this change has no effect on you. In that case Synology is just sharing files and doesn't know or care what kind of files they are.

This change is mostly for people who were using the Synology videos app to stream videos. I assume Plex is much more common on Synology and I don't believe anything changed with Plex's h265 support.

If you were using the built-in Synology videos app and have objections to Plex, give Jellyfin a try. It should handle h265 and doesn't require a purchase like Plex does to unlock features like mobile apps.

Linux isn't dropping any codecs and should be able to handle almost any media you throw at it. Codec support depends on what app you are using, and most Linux apps use ffmpeg to do that decoding. As far as I know Debian hasn't dropped support for h265, but even if they did you could always compile your own ffmpeg libraries with it re-enabled.
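A quick way to confirm your local ffmpeg build can still decode h265 (a sketch; it only assumes ffmpeg is somewhere on your PATH):

```shell
# List ffmpeg's available decoders and pick out the HEVC ones.
# Guarded so the snippet degrades gracefully if ffmpeg isn't installed.
if command -v ffmpeg >/dev/null 2>&1; then
  hevc_line=$(ffmpeg -hide_banner -decoders | grep -i hevc || true)
  printf '%s\n' "$hevc_line"
else
  hevc_line=""
  echo "ffmpeg not found; install it from your distro's repos" >&2
fi
```

If the native `hevc` decoder shows up in that list, your ffmpeg-based players are fine as-is.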

> How can I most easily search my NAS for files needing the removed codecs?

The mediainfo command is one of the easiest ways to do this on the command line. It can tell you what video/audio codecs are used in a file.
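As a sketch of how that scan could look (the share path `/mnt/nas/videos` is a placeholder, and this assumes mediainfo is installed):

```shell
# Walk a directory tree and report any video files whose primary video
# stream is HEVC/H.265, using mediainfo's template output.
scan_for_hevc() {
  find "$1" -type f \( -iname '*.mkv' -o -iname '*.mp4' -o -iname '*.m4v' \) -print0 |
  while IFS= read -r -d '' f; do
    codec=$(mediainfo --Inform="Video;%Format%" "$f")
    case "$codec" in
      HEVC*) printf 'HEVC: %s\n' "$f" ;;
    esac
  done
}

# Only run the scan if mediainfo is available and the path exists.
if command -v mediainfo >/dev/null 2>&1 && [ -d "${MEDIA_DIR:-/mnt/nas/videos}" ]; then
  scan_for_hevc "${MEDIA_DIR:-/mnt/nas/videos}"
fi
```

Extend the `-iname` list if your library uses other container extensions.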

> With Linux and Synology DSM both dropping codecs, I am considering just taking the storage hit to convert to h.264 or another format. What would you recommend?

To answer this you need to know the least common denominator of codecs supported by everything you want to play back on. If you are only worried about playing this back on your Linux machine with your 1080s, then you already fully support h265 and you should not convert anything. Any conversion between codecs is lossy, so it is best to leave the files as they are or you will lose quality.

If you have other hardware that can't support h265, h264 is probably the next best. Almost any hardware in the last 15 years should easily handle h264.

> When it comes to thumbnails for a remote filesystem like this, are they generated and stored on my PC, or will the PC save them to a folder on the NAS where other programs could use them?

Yes, they are generated locally, and Dolphin stores them in ~/.cache/thumbnails on your local system.

[–] greyfox@lemmy.world 1 points 1 week ago

Add a -f to your umount and you can clear up those blocked processes. Sometimes you need to do it multiple times (it seems to unblock only one stuck process at a time).

When you mount your NFS share you can add the "soft" option, which will let those stuck calls time out on their own.
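As a sketch of both fixes (the server export `nas:/export/media` and mount point `/mnt/nas` are placeholders, not from the thread):

```shell
# "soft" lets stalled NFS calls fail with an error instead of blocking
# forever; each attempt waits roughly timeo/10 seconds, retried retrans times.
nfs_opts="soft,timeo=100,retrans=3"

# The commands are only printed here, since mounting is system-specific;
# run them as root against your own paths:
echo "mount -t nfs -o $nfs_opts nas:/export/media /mnt/nas"
echo "umount -f /mnt/nas   # force-unmount a hung share; repeat if needed"
```

The same options work as the fourth field of an /etc/fstab entry if you mount the share at boot.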

[–] greyfox@lemmy.world 1 points 1 week ago

North Dakota, like many states, has a renter's refund for those with lower incomes, which is designed to at least partially offset that. The limits look to be a bit low, but every little bit helps.

[–] greyfox@lemmy.world 69 points 1 month ago (4 children)

It also doesn't say that the line on the bottom is straight, so we have no idea if that middle vertex adds up to 180 degrees. I would say it is unsolvable.

[–] greyfox@lemmy.world 2 points 1 month ago

I've got several full color Hue bulbs that are the most used lights in my house. I haven't had a single failure in a decade.

I was more than a little annoyed when they decided to stop supporting my original controller for them though.

[–] greyfox@lemmy.world 0 points 1 month ago (1 children)

Gerrymandered districts are more in danger, not less. Gerrymandering is about breaking up areas that would be easy wins and spreading those votes across multiple districts.

You gain more seats, but you make every race closer.

[–] greyfox@lemmy.world 2 points 2 months ago

In any KDE app you can connect with SFTP in the open file dialog. Just type sftp://user@server/path and you can browse/open/edit files on the remote server. SSH keys plus an agent make things a lot easier here, obviously.
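For the keys-plus-agent part, a minimal sketch (the key path shown is the common default, not necessarily yours):

```shell
# Start an ssh-agent for this shell and load a key, so sftp:// URLs in
# the KDE file dialog won't prompt for a passphrase every time.
if command -v ssh-agent >/dev/null 2>&1; then
  eval "$(ssh-agent -s)" >/dev/null
  # ssh-add ~/.ssh/id_ed25519   # uncomment and point at your real key
  agent_ok=1
else
  agent_ok=0
fi
```

Desktop sessions usually start an agent for you already, in which case `ssh-add` alone is enough.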

[–] greyfox@lemmy.world 1 points 3 months ago* (last edited 3 months ago)

This was a separate outage unrelated to CrowdStrike a few hours earlier that took down a couple of airlines as well.

A majority of the VMs in the Azure CentralUS datacenter went down due to some sort of backend storage issue.

Edit: I guess I should have read the article; they do say CrowdStrike. They seem to be treating them as one event when the cloud services outage was earlier and unrelated. I had heard about grounded flights during the first outage as well, so they are likely combining the two events here.

[–] greyfox@lemmy.world 3 points 3 months ago (1 children)

I would think most Wi-Fi jamming is just deauth attacks. It is much easier to channel hop, enumerate clients, and send them deauthentication packets.

This way you don't need a particularly powerful radio/antenna; any laptop or hacking tool with Wi-Fi is all you need. There are scripts out there that automate the whole thing, so almost no deep knowledge of Wi-Fi protocols is required.

WPA3 has protected management frames to protect against this but most IoT cameras probably don't support WPA3 yet.

[–] greyfox@lemmy.world 5 points 3 months ago (1 children)

Updates for CrowdStrike are pushed out automatically, outside of any OS patching.

You can set up n-1/n-2 version policies to keep your production agent versions behind pre-prod, but other posts have mentioned that this got pushed out to all versions at once, like a signature update rather than an agent update that follows the policies.

[–] greyfox@lemmy.world 4 points 4 months ago

This is likely a FIDO token or other passwordless setup (e.g. Windows Hello).

The thumbprint would just unlock the hardware device, so the thumbprint itself wouldn't need to be transmitted to your credit issuer. This gives you full two factor authentication of your identity because you need the hardware device (something you have) and your biometric (something you are). They also often allow pins (something you know) instead of biometrics as the second factor.

[–] greyfox@lemmy.world 2 points 5 months ago

I believe so. The package descriptions for most of the ZFS packages in Ubuntu mention OpenZFS, so it certainly appears that way.

You can still create pools that are compatible with Oracle Solaris; you just have to set the pool version to 28 or older when you create them, and obviously never upgrade it. That will prevent you from using any of the newer features that have been added since the fork.
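A sketch of what that looks like; the pool name and device are placeholders, and the command is only echoed here because pool creation is destructive:

```shell
# Pin the legacy on-disk format at creation time. Pool version 28 is the
# last release common to OpenZFS and Oracle's closed-source ZFS.
create_cmd="zpool create -o version=28 tank /dev/sdX"
echo "$create_cmd"
# Afterwards, avoid "zpool upgrade tank", which would move the pool to
# feature flags and break Solaris compatibility.
```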
