h3ndrik

joined 1 year ago
[–] h3ndrik@feddit.de 3 points 4 months ago* (last edited 4 months ago) (2 children)

I'm not sure ActivityPub allows for an extension like that. And if you open up a separate direct channel via TURN, it'll be incompatible with something like Mastodon anyway, so I don't see a good reason to bother with the additional overhead of AP in the first place. You could then just send the status updates in some efficient binary representation as data packets directly to the other players. Why use ActivityPub, which needs to encode that in some JSON, send it to your home instance, which handles it, puts it in the outbox, sends HTTP POST requests to the inboxes of your teammates, where it then needs to be retrieved by them? In my eyes it's just a very complicated and inefficient way of transferring the data and I really don't see any benefit at all.

So instead of extending AP and wrapping the game state updates into AP messages, I'd just send them out directly and skip AP altogether. That probably reduces the program code needed to be written from like 20 pages to 2 and makes the data arrive nearly instantly.
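A minimal sketch of that direct approach. The field layout here (player id, position, health) and the packet format are made up purely for illustration; a real game would define its own:

```python
import socket
import struct

# Hypothetical fixed binary layout for one state update:
# 4-byte unsigned player id, two 4-byte floats (x, y), 2-byte unsigned health.
# "!" selects network byte order.
STATE_FORMAT = "!IffH"

def pack_state(player_id: int, x: float, y: float, health: int) -> bytes:
    """Encode one state update as a compact 14-byte datagram payload."""
    return struct.pack(STATE_FORMAT, player_id, x, y, health)

def unpack_state(data: bytes) -> tuple:
    """Decode a received datagram back into (player_id, x, y, health)."""
    return struct.unpack(STATE_FORMAT, data)

def send_state(sock: socket.socket, peer: tuple, player_id: int,
               x: float, y: float, health: int) -> None:
    # One UDP datagram straight to the teammate's address --
    # no JSON encoding, no home-instance hop, no inbox/outbox round trip.
    sock.sendto(pack_state(player_id, x, y, health), peer)
```

Compare the 14 bytes per update here against a JSON-wrapped ActivityPub object plus an HTTP POST per recipient; the latency difference is the bigger deal for real-time game state.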

I suppose I could imagine ActivityPub being part of other things in a game, though. Just not the core mechanics... For example it could do the account system. Or achievements or some collectibles which can then be commented and liked by other players.

[–] h3ndrik@feddit.de 1 point 4 months ago

Probably for European users if Europe decides to force gatekeeping platforms to implement such a feature.

[–] h3ndrik@feddit.de 3 points 4 months ago

Port forwards in the router + DynDNS.

[–] h3ndrik@feddit.de 14 points 4 months ago* (last edited 4 months ago)

Though those leaks showed they actually did it on a large scale. I don't think they stopped for some arbitrary reason. Why would they? And technology developed further, surveillance is only getting easier. I'd say even without a tin-foil hat on, it's more likely they do it than not.

[–] h3ndrik@feddit.de 11 points 4 months ago* (last edited 4 months ago)

Well, centralization and giving up your freedoms, letting someone else control you, is always kinda easy. Same applies to all the other big tech companies and their platforms. I'd say it applies to other aspects of life, too.

And I'd say it's not far off from the usual setup. If you had a port forward and DynDNS like lots of people have, the DNS record would update automatically; you'd need to make sure the port forward is activated if you got a new router, but that's pretty much it.

But sure, if it's too inconvenient to put in the 5 minutes of effort it takes to set up port forwarding every time you move, I also don't see an alternative to tunneling. Or you'd need to pay for a VPS.

[–] h3ndrik@feddit.de 1 points 4 months ago* (last edited 4 months ago)

Ah, nice. Alright. Thanks again. I'll see how I can do it. Unfortunately I've already set everything up, joined Rooms and connected a few bridges. I hope it doesn't break. I'll do a backup first. Seems reasonable and not that hard to upgrade.

[–] h3ndrik@feddit.de 1 point 4 months ago (2 children)

Oh well, both seem reasonable. Maybe I should switch before the projects diverge too much. Conduwuit seems pretty active. Hope it stays that way.

Do you happen to have a link where I can read the backstory myself? Thanks for the info anyways. Seems to be a good call.

[–] h3ndrik@feddit.de 2 points 4 months ago (4 children)

I found that. Seems it mainly addresses caching and database performance, adds some admin and moderation commands. I'm not sure if it addresses any of the shortcomings I have.

My main question is: Which one is going to be maintained in the years to come and have the latest features implemented? And secondly: Why a fork? Why don't they contribute their fixes upstream to Conduit?

[–] h3ndrik@feddit.de 1 point 4 months ago* (last edited 4 months ago)

Not possible. Almost all mailservers have mitigations against this kind of thing. Even if you wrote a script, it wouldn't work on any properly configured mailserver.

[–] h3ndrik@feddit.de 2 points 4 months ago

Ah, well I only read the official documentation on https://docs.conduit.rs/

I'm gonna take a look at this later.

[–] h3ndrik@feddit.de 1 point 4 months ago

Depends a bit on how many images and videos get shared. If it's mainly used for chat by a bunch of people, with a few gifs and stickers in-between, it shouldn't consume that much storage. But sure, if you frequently share all your vacation photos, the cache is going to grow fast.

[–] h3ndrik@feddit.de 1 point 4 months ago* (last edited 4 months ago) (2 children)

Definitely the whole server name. Other servers and clients can't guess that information. I think it's properly documented how to do it.

 

There are lots of projects that enable groups to organize themselves, gather ideas and organize documents.

Does anyone know any Free Software solution that is somewhat tailored to the needs of an action group? It should be easy to use, let us invite people to participate and provide some means of collecting signatures for the cause. Ideally it'd also provide a Wiki for later, a contact forum and some means to organize and collaborate on ideas, brainstorm and schedule meetings.

I'd also like to hear about other solutions, even if they target something broader and I'd have to customize them, or if they miss some features but can be combined with other software. Most importantly, it has to be easy to use and inviting, so people will want to participate.

 

Hey fellow users of the Fediverse, instance admins and platform developers. I'd like to see better means of handling adult content on the Fediverse. I'd like to spread a bit of awareness and request your opinions and comments.

Summary: A more nuanced concept is a necessary step towards creating a more inclusive and empowering space for adolescents while also catering to adult content. I suggest extending the platforms with additional tools for instance admins, content-labels and user roles, taking into account the way the Fediverse is designed and the different jurisdictions, and shifting the responsibility to the right people. The concept of content-labels can also aid moderation in general.

The motivation:

We are currently disadvantaging adolescents and making life hard for instance admins. My main points:

  1. Our platforms shouldn't only cater to adults. I don't want to delve into providing a kids-safe space, because that's a different use-case and a complex task. But there's quite some space in-between. Young people also need places on the internet where they can connect, try things out and slowly grow and approach the adult world. I think we should be inclusive and empower that age group - let's say 14-17 year olds. Currently we don't care, and I'd like that to change. It'd also help people who are parents, teachers and youth organizations.

  2. But the platform should also cater to adults. I'd like to be able to discuss adult topics. Since everything is mixed together... For example if I were to share my experience on adult stuff, it'd make me uncomfortable if I knew kids are probably reading that. That restricts me in what I can do here.

  3. Requirements by legislation: Numerous states and countries are exploring age verification requirements for the internet. Or it's already mandatory but can't be achieved with our current design.

  4. Big platforms and porn sites have the means to deal with that: money and lawyers. It's considerably more difficult for our admins. I'm pretty sure they'd prosecute me at some point if I tried to do the same. I don't see how I could legally run my own instance at all without overly restricting it with the current tools I have available.

Some laws and proposals

Why the Fediverse?

The Fediverse strives to be a nice space. A better place than just a copy of the established platforms including their issues. We should and can do better. We generally care for people and want to be inclusive. We should include adolescents and empower/support them, too.

I'd argue it's easy to do. The Fediverse provides some unique advantages. And currently the alternative is to lock down an instance, overblock and rigorously defederate. Which isn't great.

How?

There are a few design parameters:

  1. We don't want to restrict other users' use-cases in the process.
  2. The Fediverse connects people across very different jurisdictions. There is no one-size-fits-all solution.
  3. We can't tackle an impossibly big task. But that shouldn't keep us from doing anything. My suggestion is not to go for a perfect solution and fail in the process, but to implement something that is considerably better than the current situation. It doesn't need to be perfect and water-tight to be a big step in the right direction and of real benefit to all users.

With that in mind, my proposal is to extend the platforms to provide additional tools to the individual instance admins.

Due to (1) not restricting users, the default instance setting should be to allow all content. The status quo is unchanged, we only offer optional means to the instance admins to tie down the place if they deem appropriate. And this is a federated platform. We can have instances that cater to adults and some that also cater to young people in parallel. This would extend the Fediverse, not make it smaller.

Because of (2) the different jurisdictions, the responsibility has to be with the individual instance admins. They have to comply with their legislation, they know what is allowed and they probably also know what kind of users they like to target with their instance. So we just give a configurable solution to them without assuming or enforcing too much.

Age-verification is hard. Practically impossible. The responsibility for that has to be delegated and handled on an instance level. We should stick to attaching roles to users and have the individual instance deal with it and come up with a way for people to attain these roles. Some suggestions: pull the role "adult" from OAuth/LDAP. Give the role to all logged-in users. Have admins and moderators assign the roles.

The current solution for example implemented by LemmyNSFW is to preface the website with a popup "Are you 18?... Yes/No". I'd argue this is a joke and entirely ineffective. We can skip a workaround like that, as it doesn't comply with what is mandated in lots of countries. We're exactly as well off with or without that popup in my country. And it's redundant. We already have NSFW on the level of individual posts. And we can do better anyways. (Also: "NSFW" and "adult content" aren't the same thing.)

I think the current situation with LemmyNSFW, which is blocked by most big instances, showcases the current tools don't work properly. The situation as is leads to defederation.

Filtering and block-listing only works if people put in the effort and tag all the content. It's probably wishful thinking that this becomes the standard and happens to a level that is satisfactory. We probably also need allow-listing to compensate for that. Allow-list certain instances and communities that are known to only contain appropriate content. And also differentiate between communities that do a good job and are reliably providing content labels. Allow-listing would switch the filtering around and allow authorized (adult) users to bypass the list. There is an option to extend upon this at a later point to approach something like a safe space in certain scenarios. Whether this is for kids or adults who like safe-spaces.

Technical implementation:

  • Attach roles to user accounts so they can later be matched to content labels. (ActivityPub actors)
  • Attach labeling to individual messages. (ActivityPub objects)

This isn't necessarily a 1:1 relation. A simple "18+" category and a matching flag for the user account would be better than nothing. But legislation varies on what's appropriate. Ultimately I'd like to see more nuanced content categories and have the instance match which user group can access which content. A set of labels for content would also be useful for other moderation purposes. Currently we're just able to delete content or leave it there. But the same concept can also flag "fake-news" and "conspiracy theories" or "trolling" and make the user decide if they want to have that displayed to them. Currently this is up to the moderators, and they're just given two choices.

For the specific categories we can have a look at existing legislation. Some examples might include: "nudity", "pornography", "gambling", "extremism", "drugs", "self-harm", "hate", "gore", "malware/phishing". I'd like to refrain from vague categories such as "offensive language". That just leads to further complications when applying it. Categories should be somewhat uncontroversial, comprehensible to the average moderator and cross some threshold appropriate to this task.

These categories need to be a well-defined set to be useful. And the admins need a tool to map them to user roles (age groups). I'd go ahead and also allow the users to filter out categories on top, in case they don't like hate, trolling and such, they can choose to filter it out. And moderators also get another tool in addition to the ban hammer for more nuanced content moderation.
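The mapping described above could be sketched like this. The role names, label names and which labels each role may see are illustrative only; a real instance admin would configure them per their jurisdiction:

```python
# Sketch of an instance-level mapping from user roles to the content-labels
# each role may see. Unlabeled content is visible to everyone by default,
# matching the "default allows all content" design parameter.
ROLE_VISIBLE_LABELS = {
    "adolescent": set(),                                  # no flagged content
    "adult": {"nudity", "pornography", "gambling", "drugs"},
}

def may_view(user_roles: set, content_labels: set,
             user_filters: frozenset = frozenset()) -> bool:
    """True if every label on the content is permitted for at least one of
    the user's roles and none of it is filtered out by the user's own
    per-category preferences (the optional filter layer on top)."""
    if content_labels & user_filters:
        return False
    allowed = set().union(
        *(ROLE_VISIBLE_LABELS.get(r, set()) for r in user_roles), set())
    return content_labels <= allowed
```

Note the two independent layers: the admin-configured role mapping enforces legislation, while `user_filters` lets an individual user additionally hide categories like "hate" or "trolling" for themselves.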

  • Instance settings should include: show all content; (blur/spoiler content;) restrict content for non-logged-in users; hide content entirely from the instance. Plus the user-group <-> content-label mappings.

  • Add the handling of user-groups and the mapping to content-labels to the admin interface.

  • Add the content-labels to the UI so the users can flag their content.

  • Add the content-labels to the moderation tools.

  • Implement allow-listing of instances and communities in a separate task/milestone.

  • We should anticipate age-verification getting mandatory in more and more places. Other software projects might pick up on it or need to implement it, too. This solution should tie into that. Make it extensible. I'd like to pull user groups from SSO, OAuth, OIDC, LDAP or whatever provides user roles and is supported as an authentication/authorization backend.
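Pulling user groups from an auth backend, as the last point suggests, could look roughly like this. The claim names (`birthdate` is a standard OIDC claim, `groups` a common convention) are assumptions; which claims an instance's identity provider actually emits depends on its configuration:

```python
from datetime import date

def roles_from_claims(claims: dict, today: date = None) -> set:
    """Derive instance user roles from an OIDC ID token's claims.
    Role names ("adult"/"adolescent") are illustrative."""
    today = today or date.today()
    roles = set()
    birthdate = claims.get("birthdate")  # OIDC standard claim, "YYYY-MM-DD"
    if birthdate:
        born = date.fromisoformat(birthdate)
        # Subtract one if the birthday hasn't occurred yet this year.
        age = today.year - born.year - (
            (today.month, today.day) < (born.month, born.day))
        roles.add("adult" if age >= 18 else "adolescent")
    # A group from LDAP/OAuth can also be mapped straight through.
    if "verified-adults" in claims.get("groups", []):
        roles.add("adult")
    return roles
```

Keeping this in one small function is what makes the design extensible: when a jurisdiction later mandates a specific verification scheme, only the role-derivation step changes, not the content-label machinery.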

Caveats:

  • It's a voluntary effort. People might not participate enough to make it useful. If most content doesn't include the appropriate labels, block-listing might prove ineffective. That remains to be seen. Maybe we need to implement allow-listing first.
  • There will be some dispute, categories are a simplification and people have different judgment on exact boundaries. I think this proposal tries to compensate for some of this and tries not to oversimplify things. Also I believe most of society roughly agrees on enough of the underlying ethics.
  • Filtering content isn't great and can be abused. But it is a necessary tool if we want something like this.

🅭🄍 This text is licensed “No Rights Reserved”, CC0 1.0: This work has been marked as dedicated to the public domain.

 

So, I got into NixOS and installed it on a VPS a few days ago. I've previously used yunohost.org (a debian based all-in-one selfhosting solution) and docker-compose. But I (now) really like the Nix(OS) approach, the amount of packaged software and how everything ties together in a clean server configuration.

However... I need a bit more information on the server stuff. Are there nice configurations around which I can incorporate and learn from? Extensive tutorials from other people who run their own services or communities?

I mean the basic stuff isn't a problem. I got Nextcloud and the most important things running, a DNS adblocker, a chat server, nginx etc. But ultimately I'd like to share some services with friends and family. So I need single sign-on (SSO), preferably with an LDAP directory. An email server... And at this point the wiki and just googling stop being helpful.

Are there people who share their experience with LDAP, Authentik, Zitadel, Authelia, Keycloak or whatever SSO/authentication software is packaged in Nix? I can't find anything from people who actually use it. Is there a comparison of the several available email servers?
