makeasnek

joined 1 year ago
[–] makeasnek@lemmy.ml 21 points 9 months ago (13 children)

Nostr vs Mastodon on Privacy & Autonomy:

  • Relay/instance admins can choose which content goes through their relay on either platform
  • On Nostr, your DMs are encrypted. On Mastodon, the admins of both the sender's and receiver's instances can read them, as can anybody who breaks into either server
  • On Nostr, a relay admin can control what goes through their relay, but they can't stop you from following/DMing/being followed by whoever you want, since you are typically connected to multiple relays at once. As long as one relay allows it, the signal flows. Nostr provides the best of both worlds: moderated "public squares" run according to your moderation preferences, plus the autonomy to follow/DM/be followed by anybody you want (assuming that individual user hasn't blocked you).
  • On Mastodon, your identity is tied to your instance. If your instance goes down, you lose your follower/followee lists, DMs, etc. On Nostr, your identity is just a keypair that no relay owns, so this doesn't happen (a rough sketch of how that works follows this list). Mastodon provides some functionality to migrate identity between instances, but it's clunky and generally requires some advance notice.
  • Both have all the same functions as Twitter: tweet, reply, re-tweet, DM, like, etc.
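
To make the identity bullet concrete, here is a rough Python sketch of how a NIP-01 Nostr event id is computed. The placeholder pubkey and the omission of the actual Schnorr signing step are simplifications for illustration; the point is that the identity is just a keypair, and no relay appears anywhere in the event.

```python
import hashlib
import json
import time

def nostr_event_id(pubkey_hex: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """NIP-01 event id: sha256 of the canonical JSON serialization of the event fields."""
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# A hypothetical "text note" (kind 1). Note that no relay is referenced anywhere:
# the same signed event can be published to any number of relays.
pubkey = "ab" * 32  # placeholder 32-byte hex public key, for illustration only
event_id = nostr_event_id(pubkey, int(time.time()), 1, [], "hello nostr")
print(event_id)
# The client then signs this id with the private key (a BIP-340 Schnorr signature),
# and any relay or client can verify the event against the pubkey alone.
```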

Why I think nostr will win https://lemmy.ml/post/11570081

[–] makeasnek@lemmy.ml -1 points 9 months ago* (last edited 9 months ago)

It can. Lightning transactions are as easy and lightweight to process as e-mail: they measure in bytes or kilobytes, and no mining is required.

[–] makeasnek@lemmy.ml -3 points 9 months ago* (last edited 9 months ago) (2 children)

Except it's not. Lightning is incredibly decentralized: you can run a full Lightning node on a Raspberry Pi, and I have one running on my phone. Look up a graph of the Lightning Network; it looks just like any other decentralized system. Nodes you route through never take custody of your funds, unlike a bank.

[–] makeasnek@lemmy.ml 1 points 9 months ago

Ethereum uses proof-of-stake; there is no "mining" in the traditional sense, so its power consumption is closer to that of e-mail than to mined cryptocurrencies. But proof-of-stake leads to centralization over time, which is antithetical to what Bitcoin people want.

[–] makeasnek@lemmy.ml -3 points 9 months ago (4 children)

It's been letting people be their own bank for 15 years. Using Bitcoin Lightning, you can send transactions across the globe that confirm instantly, for pennies in fees. The supply has remained capped at 21 million. It's doing exactly what it said it would do, 24/7, 365, without a single hack or hour of downtime.

[–] makeasnek@lemmy.ml 5 points 9 months ago (1 children)

"Not everybody will use it and it's not 100% perfect so let's not try"

[–] makeasnek@lemmy.ml 3 points 9 months ago (2 children)

Putting it on the blockchain ensures you can always go back and say "see, at this date/time, this key signed this file/hash". If you know the key of the uploader (the White House), you can verify it was signed by that key. Guatemala used a similar scheme to verify votes in elections using Bitcoin. Could the precinct lie and put in the wrong vote count? Of course! But what it prevented was somebody saying "well actually the precinct reported a different number", since anybody could verify on chain that they didn't. It also prevented the precinct themselves from changing the number in the future if they were put under some kind of pressure.
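
As a hedged illustration of the verification half of that scheme (the file contents and keys below are hypothetical, and Ed25519 via the `cryptography` package stands in for whatever signature scheme a real deployment would use): hash the file, have the publisher sign the hash, and let anyone re-check it later against the publisher's known public key, with the hash itself anchored on chain for the timestamp.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the bytes of the published file (hypothetical content).
file_bytes = b"contents of the published video file"
file_hash = hashlib.sha256(file_bytes).digest()

# The publisher signs the hash once; this digest is what would be anchored on chain.
publisher_key = Ed25519PrivateKey.generate()
signature = publisher_key.sign(file_hash)

# Later, anyone holding the publisher's public key can re-hash their copy of the
# file and check the signature, without trusting whoever happens to be hosting it.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, file_hash)
    print("this key signed this file/hash")
except InvalidSignature:
    print("file or signature does not match the publisher's key")
```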

[–] makeasnek@lemmy.ml 1 points 9 months ago* (last edited 9 months ago)

There is no "delete a user from the internet" button. It doesn't exist. Even if a single admin could ban a user from entire network, which is giving immense amount of power to any admin, all that user has to do is make a new account to get around it. That's true for Nostr, AP, Twitter, Facebook, E-mail, etc. This is why spam exists and will always exist. AP or nostr or whoever isn't going to solve spam or abuse of online services, the best we can do it mitigate the bulk of it. Relays and instances can share ban lists in nostr or AP, that can be automated, that is the way to mitigate the problem. There is, however, a "delete a person from society" button we can press, and that is LEOs job. That, conveniently, also deletes them from the internet. It's just not a button we trust anybody but government to press. We do have a "delete a user from most of AP/Nostr" button in the form of shared blocklists.

As we adopt stronger and stronger anti-spam/anti-abuse measures, we make it harder and harder to join and participate in networks like the internet. This isn't actually a problem for spammers: they have a financial incentive, so they can pay people to fill out captchas and do SMS verifications and whatever else they need to do. All we do by increasing the cost to spam is change which kinds of spam are profitable to send. Other abuse of services that isn't spam has its own intrinsic motivations, which may outweigh the cost of making new accounts. At a certain level of anti-spam mitigation, you end up hurting end users more than spammers.

A captcha and e-mail verification block something like 90% of spam attempts and are a very small barrier for users, but even that has accessibility implications. Requiring users to receive an SMS? An additional 10%, but now you've excluded people who don't have their own cell phone or who use a VoIP provider. You've made it more dangerous for people to use your service to seek help for things like addiction or domestic abuse, since their partner or family member may share the same phone. You've made it harder to engage in dissent against the government in authoritarian regimes. You've also made it much more difficult to run a relay, since running a relay now requires access to an SMS service, payment for that SMS service, etc. Requiring them to receive a letter in the mail? An additional 10%, but now you've excluded people who don't have a stable address or mail access, it takes a week to sign up for your website, and that's before even getting into apartment numbers and the complications you'd face there. For a listing on Google Maps, maybe a letter in the mail is a reasonable hurdle; after all, Google only wants to list businesses that have a physical address. For posting to Twitter? It's pretty ludicrous.

I generally trust relay admins to make moderation decisions, otherwise I wouldn't be on their instance or relay in the first place. And my trust extends to other admins they work with and share ban lists with. And that's fine. But remember that any person, with any set of motivations, can be a relay or instance admin. That person could be the very troll we are trying to stop with these anti-spam and anti-abuse measures. What I don't trust is any random person on the internet being able to make moderation decisions for the entire internet. Which means that any approach to bans needs to be federated and built on mutual trust between operators.

[–] makeasnek@lemmy.ml 1 points 9 months ago

Yes very true!

[–] makeasnek@lemmy.ml -2 points 9 months ago* (last edited 9 months ago)

Worth mentioning here that Lemmy itself accepts donations in Bitcoin, both directly and via OpenCollective. Many instances do as well. Bitcoin is a free, federated, open-source protocol and software for money, so it makes sense that there's some crossover there. https://join-lemmy.org/crypto

If you want a platform with built-in tipping, especially a federated, open-source one, you can't use PayPal; the fees make microtransactions impossible. Same with basically every other competitor out there. You'd have to build your own payment processor (millions of dollars, massive yearly overhead, you have to handle dispute resolution, you need to forge independent relationships with Visa/MC/Amex/Plaid/etc., transactions all have different settlement times sometimes measured in weeks; it's an absolute bird's nest of problems, and that's just to do it for the US), and each instance would have to have its own. It's a nightmare. Or, simple idea, you can just use some type of cryptocurrency.
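
Back-of-the-envelope numbers show why card rails can't do tipping. The fee model below uses illustrative defaults (a percentage plus a fixed charge), not a quote of any particular provider's schedule:

```python
def card_fee(amount_usd: float, percent: float = 0.029, fixed: float = 0.30) -> float:
    """Typical card-rail fee shape: a percentage plus a fixed per-transaction charge.
    The 2.9% + $0.30 defaults are illustrative, not any specific provider's rates."""
    return amount_usd * percent + fixed

tip = 0.25  # a 25-cent tip
fee = card_fee(tip)
print(f"${fee:.2f} fee on a ${tip:.2f} tip ({fee / tip:.0%} of the payment)")
# The fixed fee alone exceeds the tip, so microtransactions are a non-starter;
# a Lightning routing fee is typically a few sats (small fractions of a cent).
```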

The choice to avoid it is yours alone, but it seems like a weird thing to be mad about, and a weird basis on which to avoid social networks. Do you have such strong reactions to other assets like stocks? Or other currencies? Would you refuse to use Facebook because users could pay for extra photo storage with Turkish lira? I don't love the Turkish government, but it seems like a weird place to draw a line in the sand over which social networks I'll use.

If you don't like the Bitcoin feature, you don't have to use it. Bitcoin has a market cap that puts it in the top 25 countries by GDP, higher than Sweden. It's been doing its thing for 15 years. People may say they don't like it, but if you decide not to use any platform or service which accepts or uses Bitcoin, your circle of usable places is going to keep getting smaller. Have fun not shopping at Safeway or any other major grocery store, since they all have Bitcoin ATMs in the form of Coinstars. Have fun not using mutual funds or other investment portfolios from major banks or index funds, since they all have a degree of exposure to Bitcoin. Have fun not using Cash App or other major payment platforms which feature some kind of Bitcoin integration. Have fun not being able to use the DMV in Colorado, where you can renew your license with Bitcoin, and you won't be able to ride public transit in Argentina. Bitcoin is global, and adoption grows year on year.

"Crypto" is full of scams and rug pulls and bad actors. But Bitcoin has kept its promises to faithfully relay transactions without a single hack or day of downtime for 15 years. They are not the same.

[–] makeasnek@lemmy.ml 1 points 9 months ago* (last edited 9 months ago) (2 children)

Before we get into the weeds here, let's start with an important basic premise: moderation ability at the protocol level, from an instance/relay admin's perspective, is identical in Nostr and AP.

> Are there moderation tools to propagate bans across relays quickly?

Relay operators can share ban lists like they do in AP. Relay operators can only directly control their own relay, not other relays. I don't know the ins-and-outs of how the interface on the admin side looks, but at a protocol level, AP and Nostr offer the same abilities.

> Some users need to be booted off the network entirely and swiftly sometimes, we’ve seen several cases of this in Lemmy already with users posting horrendous shit. I’d be concerned that one of my relays would lag on banning (timezone differences for moderators or whatever innocuous reason) and these users achieve their goal of more people seeing the shit they post. For some people this might trigger PTSD, which is why I say it would be a huge barrier to mass adoption until that issue is resolved.

Relays sharing ban lists can help solve this problem. I would argue that we don't want to give that power (to ban a user from the entire network) to a single relay admin or even a couple of relay admins (since anybody can be a relay admin), so some form of broad consensus needs to exist, OR sets of relays can form their own little networks of trust where they automatically trust a ban from other admins in that network (see the sketch below). A relay admin doesn't need to be able to ban somebody from the entire network just because they disagree with that user's post; they can simply ban the user on their own relay. There is value in having public squares with varying degrees of moderation, among other reasons because laws about what kind of speech is acceptable vary country by country. There is value in having mainstream platforms which refuse to host some kinds of content, and in having that be a different moderation policy than the one used by the government, for example. Remember that legality and morality are not the same, and that what is legal vs. illegal differs between jurisdictions. We don't want the legal standards of Russia or China to become the legal standards the entire network has to follow.
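
A minimal sketch of that "networks of trust" idea, assuming relays can fetch each other's ban lists (all names here are hypothetical, and this is not an existing Nostr or AP feature): only adopt a ban locally once some threshold of trusted peer admins have issued it.

```python
from collections import Counter
from typing import Dict, Set

def merged_bans(peer_bans: Dict[str, Set[str]], trusted: Set[str], threshold: int) -> Set[str]:
    """Adopt a ban locally only if at least `threshold` trusted peer relays have issued it.

    peer_bans maps a relay/admin name to the set of pubkeys that relay has banned.
    """
    votes = Counter()
    for admin, banned in peer_bans.items():
        if admin in trusted:
            votes.update(banned)
    return {pubkey for pubkey, count in votes.items() if count >= threshold}

# Hypothetical example: three trusted peer relays, require agreement from two of them.
peer_bans = {
    "relay-a": {"npub_spammer", "npub_troll"},
    "relay-b": {"npub_spammer"},
    "relay-c": {"npub_troll"},
    "relay-unknown": {"npub_bystander"},  # not in the trust set, so ignored
}
print(merged_bans(peer_bans, trusted={"relay-a", "relay-b", "relay-c"}, threshold=2))
# -> {'npub_spammer', 'npub_troll'}
```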

If the user is doing something which is very illegal, which I believe is what you are referring to, that is a job for law enforcement. Neutral networks like the internet are traditionally policed "at the edges". We don't have Gmail proactively filtering for objectionable or illegal content, because of the consequences that come with it: privacy invasion, false positives, additional computational load, reduced reliability of sending and receiving between e-mail carriers, etc. Comcast is not inspecting packets as they fly through their network at the speed of light, delaying them, and deciding whether they should be passed or not. It's the internet; they just pass them through. Instead, we say "this is an open, neutral network, and if you break the law, LEO will deal with it".
