lodion

joined 2 years ago
[–] lodion@aussie.zone 4 points 1 month ago

Probably both this and Cloudflare caching. Looks like I set CF to cache for 1 year at some point; I can lower it... but that won't "fix" this, only limit how long it's an issue for.
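If anyone wants to check what the edge is actually serving for a given page, a quick sketch like this works (the URL is just a placeholder, swap in whichever path looks stale):

```python
# Inspect Cloudflare's cache headers for a path that looks stale.
# The URL below is a placeholder, not a specific endpoint.
import requests

resp = requests.get("https://aussie.zone/example/path", timeout=10)

# cf-cache-status shows whether CF answered from cache (HIT) or went to the
# origin (MISS/BYPASS); age is how long the cached copy has sat at the edge.
for header in ("cf-cache-status", "age", "cache-control", "expires"):
    print(f"{header}: {resp.headers.get(header, '<not set>')}")
```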

[–] lodion@aussie.zone 2 points 2 months ago (1 children)

His term has covered 4 months so far... Jan, Feb, Mar, Apr.

[–] lodion@aussie.zone 1 point 3 months ago

You're wrong, I'll leave it at that. Won't be replying any further.

[–] lodion@aussie.zone 2 points 3 months ago (2 children)

With the resources available, it's not feasible for AZ to develop and deploy custom solutions to a problem that remote instances could resolve with trivial configuration changes.

I'm not going to address specific parts of your post; suffice it to say I disagree with almost everything you said.

As I said previously, if you have a workable solution please do develop it and submit a PR to the lemmy devs. I'd be happy to try your suggestion should they roll it in.

[–] lodion@aussie.zone 4 points 3 months ago (4 children)

You're contradicting yourself there. By definition, adding an external service is a customization of lemmy. I'm not interested in running unvetted software from a third party.

This has been discussed previously, including an offer from a reputable source to batch content from LW. That setup would have required an additional server for AZ, located close to LW, with LW sending their outgoing federation traffic for AZ to it; that server would then batch the traffic and send it on to the real AZ server. The offer was declined, though appreciated.

I've been transparent and open about the situation. You seem to think this is the fault of AZ, and that we're willfully not taking action we should be taking. This is not the case.

As it stands the issue is inherent to single-threaded lemmy federation, which is why the devs added the option for multiple concurrent threads. Until LW enable this feature, we'll see delayed content from them whenever their activity volume is greater than what can be federated over a single thread. To imply this is the fault of the receiving instances is disingenuous at best, and deliberately misleading at worst.
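To make the bottleneck concrete, here's a toy sketch (not Lemmy's actual federation code, and the numbers are made up) of why a single in-flight send caps throughput at roughly one activity per round trip, while concurrent sends to the same instance scale it up:

```python
# Toy model of sequential vs concurrent federation sends to one remote
# instance. Not Lemmy's code; RTT and backlog size are made-up numbers.
import asyncio
import time

RTT = 0.2          # assumed 200 ms round trip to the remote instance
ACTIVITIES = 50    # backlog of activities waiting to federate

async def send_activity(activity_id: int) -> None:
    # Stand-in for an HTTP POST of one activity to the remote inbox.
    await asyncio.sleep(RTT)

async def drain_queue(concurrency: int) -> float:
    sem = asyncio.Semaphore(concurrency)

    async def worker(i: int) -> None:
        async with sem:
            await send_activity(i)

    start = time.perf_counter()
    await asyncio.gather(*(worker(i) for i in range(ACTIVITIES)))
    return time.perf_counter() - start

async def main() -> None:
    for concurrency in (1, 8):
        elapsed = await drain_queue(concurrency)
        print(f"{concurrency} concurrent send(s): {elapsed:.1f}s to clear {ACTIVITIES} activities")

asyncio.run(main())
```

With one send in flight, the backlog only drains at ~1/RTT, so any sustained activity volume above that just keeps falling further behind.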

[–] lodion@aussie.zone 2 points 3 months ago (6 children)

Note I said lemmy AND the activitypub protocol, i.e. lemmy does not currently have this capability. If it were added to mainline lemmy I'd be open to configuring it, but it's not, so I can't.

The root cause of the issue is well understood, and the solution is already available in lemmy: multiple concurrent outgoing federation connections to remote instances. AZ has had this configured since it became available. LW have not yet enabled it, though they're now running a version that supports it.

Appreciate the offer, but I'm not interested in customising the AZ server configuration more than it already is. If you write it up and submit a PR that the main lemmy devs incorporate, I'd be happy to look at it.

[–] lodion@aussie.zone 4 points 3 months ago (9 children)

That isn't how lemmy and the activitypub protocol work. The source instance pushes metadata about new content, and the remote instance then needs to pull the rest. If we've not received the push yet, we can't pull the additional info.
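Roughly, the flow looks like this (a simplified sketch of the push-then-pull pattern, not Lemmy's implementation; the handler name is made up):

```python
# Simplified sketch of ActivityPub's push-then-pull pattern, not Lemmy's code.
# The source instance POSTs an activity to our inbox; we then fetch ("pull")
# the full object it references.
import requests

def handle_inbox_push(activity: dict) -> dict | None:
    """Hypothetical handler invoked when a remote instance pushes to our inbox."""
    object_ref = activity.get("object")

    # The push often carries only the object's ID (a URL), not its content.
    if isinstance(object_ref, str):
        object_id = object_ref
    elif isinstance(object_ref, dict):
        object_id = object_ref.get("id")
    else:
        return None

    # Pull: dereference the ID to get the actual post/comment. If the push
    # never arrived, we have no ID to dereference, hence the missing content.
    resp = requests.get(
        object_id,
        headers={"Accept": "application/activity+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```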

[–] lodion@aussie.zone 3 points 10 months ago (2 children)

Sorry, not interested in any hooks into Reddit or additional software requiring ongoing management.

[–] lodion@aussie.zone 89 points 1 year ago (25 children)

It was removed deliberately during the reddit exodus in order to direct new Lemmy users elsewhere, rather than overload lemmy.ml further.

[–] lodion@aussie.zone 3 points 2 years ago

Hey I do exactly the same, high 5!

[–] lodion@aussie.zone 12 points 2 years ago (3 children)
 

Has anyone seen anything recently on an ETA for passkey support in Bitwarden? A recent blog post mentioned "summer"... but I'm in the southern hemisphere, so I'm not entirely sure which months that refers to.

0 points, submitted 2 years ago* (last edited 1 year ago) by lodion@aussie.zone to c/meta@aussie.zone
 

Changelog

2024.06.26 - upgraded lemmy to 0.19.5
2024.06.09 - upgraded lemmy to 0.19.4, postgres to 16, pict-rs to 0.5.15
2024.01.23 - upgraded lemmy to 0.19.3
2024.01.11 - upgraded lemmy to 0.19.2
2023.12.21 - upgraded lemmy to 0.19.1
2023.12.17 - upgraded lemmy to 0.19.0
2023.10.12 - upgraded VPS storage to 160GB, other specs unchanged
2023.08.09 - upgraded lemmy and lemmy-ui to 0.18.4
2023.07.29 - upgraded lemmy and lemmy-ui to 0.18.3
2023.07.11 - upgraded lemmy and lemmy-ui to 0.18.2
2023.07.10 - upgraded lemmy-ui to 0.18.2-rc.1 to mitigate XSS vulnerability
2023.07.10 - VPS upgraded to 8GB RAM (required to upgrade storage, needed anyway... only $2/month)
2023.07.08 - upgraded lemmy and lemmy-ui to 0.18.1 🎉
2023.07.06 - upgraded lemmy to 0.18.1-rc.10 and lemmy-ui 0.18.1-rc.11
2023.07.04 - upgraded lemmy-ui to 0.18.1-rc.10
2023.07.04 - upgraded lemmy and lemmy-ui to 0.18.1-rc.9
2023.07.03 - upgraded Lemmy to 0.18.1-rc.4 and lemmy-ui 0.18-rc.7
2023.06.28 - VPS upgraded from 2 to 4 vCPU, other specs the same.
2023.06.24 - upgraded Lemmy to 0.18.0
2023.06.23 - upgraded Lemmy to 0.18.0-rc.6
2023.06.13 - upgraded Lemmy to 0.17.4. Increased federation workers and reduced logging storage at the same time.
2023.06.10 - VPS storage upgraded from 40GB to 80GB, other specs the same.
2023.06.09 - VPS upgraded to 4GB RAM, 2 vCPU, other specs the same.
2023.06.08 - aussie.zone created, running Lemmy 0.17.3. OVH VPS was 2GB RAM, 1 vCPU, 40GB NVME

Nerd Stuff

The aussie.zone server is currently an OVH VPS in Sydney:
8GB RAM, 4 vCPU, 160GB NVME storage

Images are stored in an object store bucket on Wasabi, also in Sydney.
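For the curious, the bucket is plain S3-compatible storage, so it can be reached with any S3 client. A rough sketch with boto3 (the endpoint, bucket name and credentials below are placeholders/assumptions, not the real values):

```python
# Rough sketch: talking to an S3-compatible Wasabi bucket with boto3.
# Endpoint, bucket name and credentials are placeholders, not the real ones.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.ap-southeast-2.wasabisys.com",  # assumed Sydney endpoint
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# List a few objects to confirm the bucket is reachable.
resp = s3.list_objects_v2(Bucket="example-image-bucket", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```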

I post updates ~~every week or so~~ randomly with current server resource graphs:
Nerd update 20/4/24
Nerd update 2/9/23
Nerd update 13/8/23
Nerd update 5/8/23
Nerd update 29/7/23
Nerd update 22/7/23
Nerd update 15/7/23
Nerd update 7/7/23
Nerd update 30/6/23

If you have any questions, please post.
