clean_anion

joined 1 week ago
[–] clean_anion@programming.dev 4 points 3 hours ago* (last edited 3 hours ago)

I assume that trolls try to provoke erratic and disproportionate reactions from others, turning the exchange into a miniature sitcom for their own entertainment. It could be a sense of victory upon watching others break down (assuming a zero-sum point of view). It could be the viewpoint that trolls operate at their own higher level, understanding each other while making fun of the lower levels (a false sense of superiority). Maybe it's a case of holding onto their own beliefs and assuming that they needn't change themselves if they disrupt any conversation that might threaten those beliefs. It might be attention seeking or an escape mechanism. It could also be a desire to avoid fitting in with everyone else and to remain separate.

(edit: grammar)

[–] clean_anion@programming.dev 3 points 8 hours ago

There are some generic observations you can use to identify whether a story was AI generated or written by a human. However, there are no definitive criteria for identifying AI generated text except for text directed at the LLM user such as "certainly, here is a story that fits your criteria," or "as a large language model, I cannot..."

There are some signs that can be used to identify AI-generated text, though they are not always accurate. For instance, AI-generated prose tends to be superficial. It often puts undue emphasis on emotions that most humans would not focus on, and it tends to be somewhat more ambiguous and abstract than human writing.

A large language model often uses poetic language instead of factual (e.g., saying that something insignificant has "profound beauty"). It tends to focus too much on the overarching themes in the background even when not required (e.g., "this highlights the significance of xyz in revolutionizing the field of ...").

There are some grammatical traits that can be used to identify AI but they are even more ambiguous than judging the quality of the content, especially because someone might not be a native English speaker or they might be a native speaker whose natural grammar sounds like AI.

The only good methods of judging whether text was AI-generated are judging the quality of the content (which one should do regardless) and looking for text directed at the AI user.
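The second method, looking for text directed at the AI user, is mechanical enough to sketch in code. Here is a toy scanner built on the two phrases quoted above; the phrase list is a hypothetical sample, not an exhaustive set:

```python
# Telltale boilerplate addressed to the LLM user rather than the reader.
# These two phrases come from the examples above; a real list would be longer.
TELLTALE_PHRASES = [
    "certainly, here is",
    "as a large language model",
]

def has_llm_boilerplate(text: str) -> bool:
    """Return True if the text contains phrases directed at an LLM user."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

print(has_llm_boilerplate("Certainly, here is a story that fits your criteria."))  # -> True
```

Of course, this only catches careless copy-pasting; anyone who trims the boilerplate defeats it, which is why content quality remains the main signal.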

If I understand the model you proposed correctly, it basically consists of making a payment to someone (whether an instance or a central authority), obtaining tokens in exchange, giving tokens to a content creator, and the content creator exchanging them to get their money back.

Having a central authority wouldn't work because it goes against the principles of the Fediverse and most users would prefer that there not be a single point of failure. Having an instance exchange money for tokens wouldn't work because there is no scarcity of tokens and no guarantee that an instance honours a request.

This method could instead be replaced by content creators adding links to receive payments with people giving money to them directly.

[–] clean_anion@programming.dev 5 points 1 day ago (2 children)

The problem is that there is nothing meaningful you can exchange this currency for. The Fediverse is fundamentally designed to allow anyone to start a server. There is no meaningful way to reward someone with anything of value except the satisfaction of having helped grow the instance they are supporting. There is no good way to boost someone without manipulating the vote count or changing the protocol itself. Many apps already offer customizability while simultaneously being free as in free beer and free as in free speech. The main reason many people move to the Fediverse is to escape an internet where everything is "enshittified," and most Fediverse users wouldn't want to shift to a proprietary model.

[–] clean_anion@programming.dev 3 points 1 day ago (2 children)

Is there a specific "undress" button? I tried looking for proof that it exists but couldn't find any (my searching skills clearly need work). Could you please share a screenshot or point me to a place where I can confirm that it exists?

[–] clean_anion@programming.dev 3 points 4 days ago (1 children)

It's most likely an error with the nozzle height. The PEI plate not heating up enough shouldn't cause the adhesion problem in the photo above (and this is not a first-layer problem, as the error is not at a uniform height). Additionally, a few lines are very faintly visible on the plate where they shouldn't be, which points to a nozzle-height issue. When adjusting the height, make sure that a piece of paper slides easily between the nozzle and the PEI plate, with only a very small amount of drag as you move it.

[–] clean_anion@programming.dev 4 points 4 days ago

That data might be easily accessible, but that was a choice Root made. I think that it is a safe assumption that Root knew most vigilantes keep their identity secret and, assuming a German background, had read Section 202 of the StGB and other relevant laws and court rulings. As such, Root most likely did this despite knowing their identity is at risk. It is likely they did this publicly specifically to inspire others, though I haven't looked at all the details and there might be a different reason.

Nothing in this comment constitutes legal advice.

[–] clean_anion@programming.dev 18 points 6 days ago

Not all hierarchies are bad. For instance, a judicial system needs different tiers of courts: if every court had universal authority and courts made conflicting decisions, the law would become even more complicated than it already is.

Similarly, in a large society that needs unity, if people made every decision directly, the results would be catastrophic, as most people don't have the time or energy to focus on every mundane decision. In such a case, electing representatives becomes mandatory, creating a hierarchy.

Yet another example is cases where fast decision-making is required (e.g., responding to an emergency). In such a case, there needs to be a central authority who holds others responsible and coordinates the response.

Ultimately, consider a hierarchy where power runs both ways: one party may have more power over the second than the second has over the first, but the second still holds some power over the first. Such an arrangement makes accountability possible within a hierarchy. Hierarchies are only wrong when the power gap grows; a small power gap is alright provided it doesn't widen with time.

You could make the argument that a chain of accountability is better (X->Y->Z->X), but even such chains may include hierarchies (i.e. X itself is a hierarchy). Similarly, authority diffused among different people also suffers from potential shifting of blame. Truly neutral relations between different parties are impossible and ultimately, a power difference exists between any two parties, though it may be minute, and this power gap must be acknowledged.

In conclusion, hierarchies have many disadvantages, but there are some domains where they are good. No system of distributing power is without flaws.

[–] clean_anion@programming.dev 2 points 6 days ago

Enable the administration password on the Tails welcome screen, create persistent storage (if it doesn't already exist), download the Flatpak file from the website, and run

torify flatpak install /path/to/file
flatpak run io.github.softfever.OrcaSlicer

Using an AppImage is not a good idea because AppImages tend to fail when the host system's libraries are missing or too old (on my Tails USB, the failure came from the GCC/library versions shipped with Tails), unless you compile your own AppImage. Flatpak is better because it bundles its own runtime, so the software runs even when the system's versions of GCC etc. are not up to date.

Please keep in mind that I have not confirmed whether this method is secure and would advise that you consider whether this is secure or not depending on your threat model.

[–] clean_anion@programming.dev 5 points 1 week ago (1 children)

TL;DR: not possible with random cookies, too much work for too little gain with already-verified cookies

There is no such add-on because random cookies will not work. Whenever someone has been authenticated, Google decides which cookie the browser should send with any subsequent requests. Google can either assign a session ID to the browser and keep the associated data on its own servers, or store the client's browser fingerprint and other data in a single cookie and sign that data.

Additionally, even with a verified session, changing your browser fingerprint may trigger a CAPTCHA. With a session token, this happens because the server stores the fingerprint associated with the previous request; with the stateless method, the fingerprint no longer matches the signed data stored inside the cookie.
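The stateless variant above can be sketched with an HMAC: the server packs the fingerprint into the cookie and signs it, so a changed fingerprint fails verification without any server-side storage. This is a minimal illustration; the key and fingerprint fields are hypothetical, and Google's actual scheme is certainly more involved:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"server-side-secret"  # hypothetical key, known only to the server

def issue_cookie(fingerprint: dict) -> str:
    """Pack the client's fingerprint into a cookie and sign it."""
    payload = json.dumps(fingerprint, sort_keys=True)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_cookie(cookie: str, current_fingerprint: dict) -> bool:
    """Check the signature, then check the fingerprint still matches."""
    payload, _, sig = cookie.rpartition("|")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # cookie was tampered with
    # A changed fingerprint no longer matches the signed data -> CAPTCHA time
    return payload == json.dumps(current_fingerprint, sort_keys=True)

fp = {"ua": "Firefox/128", "lang": "en-GB"}
cookie = issue_cookie(fp)
print(verify_cookie(cookie, fp))                                      # -> True
print(verify_cookie(cookie, {"ua": "Chrome/126", "lang": "en-GB"}))   # -> False
```

Note that no database lookup happens in `verify_cookie`: everything the server needs travels inside the cookie, which is exactly why copying such a cookie to a different browser fails.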

However, this could work with authenticated cookies: users contribute their cookies to a database, and the database distributes them based on proof of work. This approach, too, has numerous flaws. It requires trusting the database; it is heavily over-engineered; Google doesn't mind asking verified users to verify again, making it pointless; it would be more efficient to simply hire a team of people or use automated systems to solve CAPTCHAs; and it leaks a lot of data depending on your threat model.
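For concreteness, the proof of work such a database might demand before handing out a contributed cookie could be hashcash-style; the challenge string and difficulty here are hypothetical choices:

```python
import hashlib
from itertools import count

def solve_pow(challenge: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 over the challenge starts with zeros.
    difficulty counts leading zero hex digits (4 digits ~= 16 bits of work)."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Cheap check: one hash, versus the many the solver had to compute."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_pow("cookie-request-123")
print(verify_pow("cookie-request-123", nonce))  # -> True
```

Even with this rate limiting in place, none of the flaws listed above go away; the PoW only slows down how fast any one client can drain the cookie pool.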

[–] clean_anion@programming.dev 10 points 1 week ago

ASCII was interpreted as UTF-16 because the function that checked whether the given text was Unicode looked at statistical differences between bytes at even and odd positions. Many of the common phrases used to trigger this followed the 4-3-3-5 letter pattern, e.g., "Bush hid the facts". However, that particular letter layout was never actually necessary for the bug (though an even byte length was).
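You can reproduce the misinterpretation directly: decoding the even-length ASCII bytes as UTF-16LE pairs them up into CJK characters, which is roughly the conclusion the heuristic reached (this just shows the re-decoding, not the statistical check itself):

```python
# 18 ASCII bytes; even length, so they pair up cleanly as UTF-16LE code units.
raw = b"Bush hid the facts"
decoded = raw.decode("utf-16-le")  # each byte pair becomes one character

print(len(raw), len(decoded))  # -> 18 9
print(decoded)                 # nine CJK ideographs
assert all(0x4E00 <= ord(c) <= 0x9FFF for c in decoded)
```

Any even-length ASCII string whose byte pairs land in plausible Unicode ranges can confuse such a heuristic, which is why the famous 4-3-3-5 phrasing was incidental.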

[–] clean_anion@programming.dev 7 points 1 week ago

Orca works great on Debian 13 for me (I installed it as a Flatpak)
