I scrape my own bank and financial aggregator to have a self-hosted financial tool. I scrape my health insurance site to pull in data for tracking my HSA. I scrape Strava to build my own health reports.
chaospatterns
~~Can't be a passive adapter, or else that would mean DisplayPort and HDMI have to be protocol compatible. If they were, then we wouldn't have this issue.~~ Apparently I was wrong.
Just an update. Firefox 146 just dropped with:
- Firefox now natively supports fractional scaled displays on Linux (Wayland), making rendering more effective.
After upgrading to 146 and natively using Wayland, it feels faster. Some fade animations are still choppier, but on average it's at least tolerable.
Thanks for the suggestions everyone.
Interesting. I played around with X11 vs Wayland settings just to see what different configurations give me
- `MOZ_ENABLE_WAYLAND=1 /snap/bin/firefox` - exhibits the low-FPS issue
- `MOZ_ENABLE_WAYLAND=0 DISABLE_WAYLAND=1 /snap/bin/firefox` - actually feels fast like it should. Most animations feel faster, though some are still choppy. It's hard to tell.
It seems like running with Wayland is sort of the problem? That seems unexpected and concerns me, since I know distros are starting to default to Wayland.
Yep, both are plugged into the graphics card. Other programs and games are a lot faster.
If the app is just a WebView wrapper around the website, then the challenge page would load and be evaluated.
If it's a native Android/iOS app, then it probably wouldn't work because the app would try to make HTTP API calls and get back something unexpected.
With Tor hidden services, you use the .onion domain. Tor is configured as a SOCKS proxy, so the client doesn't perform a DNS query. Instead, Tor itself sees you're trying to connect to an onion domain, takes the address, and translates it into a public key that it knows how to look up in its own hidden service directory.
Only the actual hidden service holds the private key corresponding to the public key in the URL, so cryptography (and the assumption that quantum computers don't exist) ensures you're talking to the right server.
Tl;dr effectively no DNS for onion hidden services
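To make the "address translates into a public key" part concrete: for v3 onion addresses, the 56-character name is just base32 of the service's ed25519 public key plus a checksum and a version byte. A minimal illustrative decoder (my sketch of the address layout, not Tor's actual code):

```python
import base64
import hashlib

def decode_onion_v3(addr: str) -> bytes:
    """Return the 32-byte ed25519 public key embedded in a v3 onion address."""
    label = addr.removesuffix(".onion")
    raw = base64.b32decode(label.upper())      # 56 base32 chars -> 35 bytes
    pubkey, checksum, version = raw[:32], raw[32:34], raw[34:]
    assert version == b"\x03", "only v3 addresses use this layout"
    # The checksum binds the key and version together, so typos are rejected
    # before Tor ever consults the hidden service directory.
    expect = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    assert checksum == expect, "bad checksum"
    return pubkey
```

So the "lookup" never leaves Tor: the name itself *is* the key material the directory protocol needs.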
Putting the charger circuit inside the battery takes away battery capacity, so I still buy the external chargers
Unless you're running VLANs, in which case inter-VLAN routing is normally handled by the router. I also expose my home lab services over BGP, so all my traffic hits the router and then comes back to my lab services.
Every WiFi router and network has something called an SSID and a BSSID. The SSID is the friendly name that you use to show off your puns to your neighbors. The BSSID is a 6-byte MAC address. All devices use the BSSID when connecting and communicating.
With a non hidden SSID, your router broadcasts the SSID and BSSID.
The BSSID doesn't change even if you change your SSID (though APs that support multiple SSIDs create a different BSSID per network), and it's what is actually used for geolocation.
When the SSID is hidden, the router doesn't send it out, but it still sends packets with the BSSID. Clients then scream out into the void, "anybody know the SSID 'My Secret SSID'?", and the AP responds.
So basically hidden networks still send out the unique identifying address, and when you take your phone out with you, it probes for the network by name, telling everybody what your home WiFi is called.
Hidden SSIDs are not that useful.
https://forum.syncthing.net/t/discontinuing-syncthing-android/23002
According to this post, it was partly that and partly a lack of maintainers. Given there are maintainers for a fork, I'm curious why they didn't bring them into the main project.
Reason is a combination of Google making Play publishing something between hard and impossible and no active maintenance. The app saw no significant development for a long time and without Play releases I do no longer see enough benefit and/or have enough motivation to keep up the ongoing maintenance an app requires even without doing much, if any, changes.
I developed my own scraping system using browser automation frameworks. I also developed a secure storage mechanism to keep my data protected.
Yeah, there is some security, but ultimately if they expose it to me via a username and password, I can use that same information to scrape it. It helps that I know my own credentials, have access to all 2FA mechanisms, and am not brute-forcing lots of logins, so it looks normal.
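On the 2FA point: for time-based codes specifically, knowing your own seed means the scraper can generate codes itself. A minimal RFC 6238 sketch (the seed below is the RFC's published test key, not a real one):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, digits: int = 6, step: int = 30) -> str:
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)           # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: this key at T=59 yields "94287082"
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # 94287082
```

SMS or push-based 2FA can't be automated this way, which is part of why TOTP is the friendlier option for self-hosted scraping.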
Some providers protect their websites with bot detection systems that are hard to bypass, but I've closed accounts with places that made it too difficult to do the analysis I need to do.