Asudox

joined 1 year ago
[–] Asudox@lemmy.world 2 points 1 week ago (3 children)

Using ASCII in URLs is simple and less error-prone than "supporting" Unicode via percent encoding. Using ASCII for usernames is also just a convention on many platforms. ASCII is supported out of the box on major OSes, while some Unicode characters might not be. What about impersonation? And what about people trying to type the username of someone who uses Unicode? Using Unicode here is not logical.
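
As a quick illustration of what percent encoding does to a Unicode handle (a minimal sketch assuming the `percent-encoding` crate; both usernames are made up):

```rust
// A Unicode username must be percent-encoded before it is valid in a URL,
// while an ASCII username can be used as-is.
use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC};

fn main() {
    let ascii = "asudox"; // already a valid URL path segment
    let unicode = "αsüdöx"; // must be encoded first
    let encoded = utf8_percent_encode(unicode, NON_ALPHANUMERIC).to_string();
    println!("https://lemmy.world/u/{ascii}");
    println!("https://lemmy.world/u/{encoded}"); // .../u/%CE%B1s%C3%BCd%C3%B6x
}
```

Nobody is typing that second URL by hand.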

[–] Asudox@lemmy.world 5 points 1 week ago

Canta uses Shizuku to delete user and system apps. There are also a few categories like Recommended, Advanced, Unsafe, etc. Most of the apps also have comments on them by the dev (I suppose).

[–] Asudox@lemmy.world 4 points 1 week ago* (last edited 1 week ago) (1 children)

Because URLs are usually in ASCII. That was the standard; see RFC 1738 and RFC 3986. Now, you can use percent encoding, but why bother? It just complicates things.
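
For reference, RFC 3986 §2.3 defines the unreserved characters that may appear in a URL without any encoding; a hypothetical username check based on it:

```rust
// Sketch: accept only RFC 3986 "unreserved" characters
// (ALPHA / DIGIT / "-" / "." / "_" / "~"), so the username
// can be embedded in a URL without percent encoding.
fn is_url_safe_username(name: &str) -> bool {
    !name.is_empty()
        && name
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || matches!(c, '-' | '.' | '_' | '~'))
}

fn main() {
    assert!(is_url_safe_username("asudox"));
    assert!(!is_url_safe_username("αsüdöx")); // would need percent encoding
}
```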

[–] Asudox@lemmy.world 9 points 1 week ago (4 children)

You won't get non-Latin usernames anytime soon. But you can change the display name using non-Latin characters.

[–] Asudox@lemmy.world 20 points 1 week ago (2 children)

You can delete or disable the service that creates those stories. Download Shizuku and activate it, then use Canta to delete it.

[–] Asudox@lemmy.world 1 points 1 week ago

I never block anyone or any instance; I simply ignore them. I also use the Subscribed feed, so the complaints most people have about hexbear, lemmygrad, etc. content filling their feed don't apply to me either.

[–] Asudox@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

That's what uBlock Origin planned to do if YouTube ever decided to etch ads into the video stream itself.

[–] Asudox@lemmy.world 10 points 1 week ago

I was just given a computer with unrestricted internet access and learned that way. Of course, the unrestricted internet led me to visit some questionable and illegal websites, including CP and some hardcore NSFL stuff via the Tor Browser. But I don't regret it (other than those last parts).

[–] Asudox@lemmy.world 1 points 4 weeks ago

Yeah, tell them that when they were trying to deanonymize Tor users.

[–] Asudox@lemmy.world 1 points 4 weeks ago (2 children)

No. It's for privacy. If they don't support anonymous payments, there's literally no reason to host a .onion site except to fool people. I'd say that's a big red flag for a "privacy-respecting" company.

[–] Asudox@lemmy.world -1 points 4 weeks ago (1 children)

I said:

I couldn't care less as long as the language is good.

So why wouldn't I care if, in my opinion, the language is bad?

[–] Asudox@lemmy.world -2 points 4 weeks ago* (last edited 4 weeks ago) (3 children)

Sure, it is open source, but the development is done by Apple engineers. I'd also point out that Go has trackers in it. And I don't really care who the creator of a language is. Homophobe, sexist, racist, or anything similar: I couldn't care less as long as the language is good.

22
submitted 2 months ago* (last edited 2 months ago) by Asudox@lemmy.world to c/privacy@lemmy.ml
 

There has been some talk about the privacy of the digital euro. Some people say that your transactions are going to be tracked. Should a European worry about this? Would GNU Taler be a possible solution?

And it's not like the digital euro is some distant dream; it will become reality soon.

59
submitted 3 months ago* (last edited 3 months ago) by Asudox@lemmy.world to c/linux@lemmy.ml
 

I've been using Arch for a while now, and I've always used Flatpaks for proprietary software that might do some creepy shit (e.g. Steam), because Flatpaks are supposed to be sandboxed. Flatpaks have always worked flawlessly OOTB for me; the AUR is for things I trust. I've read on the internet that people prefer the AUR over Flatpaks. Why? And how do y'all cope with waiting for all the AUR-installed packages to rebuild after every update? Alacritty takes ages to build for me, which is why I only update AUR-built applications every two weeks.

 

Hello Lemmings.

I will be attempting to make a federated anime tracker this summer, but I am not quite sure what features people would want or how I would get the details for anime, manga, etc.

For the latter: my first thought was to continuously scrape other anime websites in the background, but that is most likely against the ToS of every anime tracking website, such as AniList or MAL. (I actually asked anidb.net for special access to their DB, because apparently you can request it, but I've been left on read by the two staff members.) My second idea was to make the site purely user-submitted, with submissions approved by assigned moderators. However, I think this would be quite inconvenient. I'd like to get your opinions and/or ideas on this.
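
To make the second idea concrete, here is a minimal sketch of what a moderated, user-submitted entry could look like (all names are hypothetical, not from any existing codebase):

```rust
// Hypothetical data model for moderator-approved, user-submitted anime entries.
#[derive(Debug, Clone, PartialEq)]
enum SubmissionStatus {
    Pending,          // waiting for a moderator
    Approved,         // visible to everyone
    Rejected(String), // rejection reason
}

#[derive(Debug)]
struct AnimeSubmission {
    title: String,
    episodes: u32,
    submitted_by: String, // federated handle, e.g. "user@instance"
    status: SubmissionStatus,
}

impl AnimeSubmission {
    fn approve(&mut self) {
        self.status = SubmissionStatus::Approved;
    }
}

fn main() {
    let mut sub = AnimeSubmission {
        title: "Cowboy Bebop".into(),
        episodes: 26,
        submitted_by: "someone@lemmy.world".into(),
        status: SubmissionStatus::Pending,
    };
    sub.approve();
    assert_eq!(sub.status, SubmissionStatus::Approved);
}
```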

For the former: So if you have any requests or suggestions, please drop it down in the comments section.

Thanks in advance.

 

Sup, I've been working on this project for the past few months and it's finally finished. It's a MyAnimeList-based anime recommendation system written in Rust. It's still being trained in the background, as I only started the training this morning, so the results you get may change by tomorrow.

You can try it out here: https://anote.asudox.dev

Please give feedback!
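
The post doesn't say how the recommender works internally, so purely as an illustration: one common building block of collaborative filtering over MAL-style scores is cosine similarity between users' rating vectors. A hypothetical Rust sketch (not anote's actual code):

```rust
// Illustrative only: cosine similarity between two users' anime score maps,
// a building block of many collaborative-filtering recommenders.
use std::collections::HashMap;

fn cosine_similarity(a: &HashMap<u32, f64>, b: &HashMap<u32, f64>) -> f64 {
    // Dot product over the anime both users have rated.
    let dot: f64 = a
        .iter()
        .filter_map(|(id, &sa)| b.get(id).map(|&sb| sa * sb))
        .sum();
    let norm = |v: &HashMap<u32, f64>| v.values().map(|s| s * s).sum::<f64>().sqrt();
    let denom = norm(a) * norm(b);
    if denom == 0.0 { 0.0 } else { dot / denom }
}

fn main() {
    // anime_id -> score (1..=10), as on MyAnimeList
    let alice = HashMap::from([(1, 9.0), (2, 7.0), (3, 10.0)]);
    let bob = HashMap::from([(1, 8.0), (3, 9.0), (4, 6.0)]);
    println!("similarity = {:.3}", cosine_similarity(&alice, &bob));
}
```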

 

So I want to make a new project. It will have a website and an algorithm that handles the requests. The thing is, web development in Rust feels harder than in, say, Go or Python. So I thought maybe I could keep the algorithm in Rust and expose bindings to Go, since the faster the algorithm is, the better. However, that seems to complicate things as well. So do you think I should just rewrite the current algorithm in Go? Is it fast enough that there would be no noticeable difference?

Edit: Thanks for the suggestions and advice! I decided to go with Rust for the website with Axum and the algorithm as well.
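
Since the edit settles on Axum, here is roughly what the skeleton of such a service looks like (a minimal sketch assuming axum 0.7 and tokio; the route name is made up):

```rust
// Minimal Axum server: one GET route, served on tokio.
// Cargo.toml: axum = "0.7", tokio = { version = "1", features = ["full"] }
use axum::{routing::get, Router};

// Placeholder handler standing in for the recommendation algorithm.
async fn recommend() -> &'static str {
    "recommendations would go here"
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/recommend", get(recommend));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```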

 

NOTE: Bot is currently down

cross-posted from: https://lemmy.world/post/11440349

I made this bot so that users who want to provide a quick summary of the Wikipedia article they linked in their comment can do so just by mentioning the bot in their comment; the bot will then reply with the summary.

Currently, multiple Wikipedia links in one comment are not supported.

bot: https://lemmy.world/u/wikibot
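
The post doesn't describe the implementation, but for the curious: fetching a summary is a single call to Wikipedia's public REST API. A hypothetical Rust sketch (crate and function choices are mine, not necessarily the bot's):

```rust
// Sketch: fetch an article summary from Wikipedia's public REST API.
// Cargo.toml: reqwest = { version = "0.12", features = ["blocking", "json"] }
//             serde_json = "1"
// Note: production use should set a descriptive User-Agent per Wikimedia policy.
fn fetch_summary(title: &str) -> Result<String, Box<dyn std::error::Error>> {
    let url = format!("https://en.wikipedia.org/api/rest_v1/page/summary/{title}");
    let body: serde_json::Value = reqwest::blocking::get(&url)?.json()?;
    Ok(body["extract"].as_str().unwrap_or_default().to_string())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("{}", fetch_summary("Rust_(programming_language)")?);
    Ok(())
}
```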

 

Is Jerboa going to get better markdown support? For example, superscript and subscript markdown do not render.
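
For reference, the syntax in question, which Lemmy's web UI renders but Jerboa currently doesn't (to my knowledge):

```markdown
x^2^   should render the 2 as a superscript
H~2~O  should render the 2 as a subscript
```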
