Spotlight7573

joined 1 year ago
[–] Spotlight7573@lemmy.world 28 points 3 weeks ago (2 children)

Isn't the main problem that most people don't use the E2E encrypted chat feature on Telegram, so most of what's going on is not actually private and Telegram does have the ability to moderate but refuses to (and also refuses to cooperate)?

Something like Signal gets around this by not having the technical ability to moderate (or any substantial data to hand over).

[–] Spotlight7573@lemmy.world 2 points 1 month ago

Before people can be persuaded to use them, we have to persuade or force the companies and sites to support them.

[–] Spotlight7573@lemmy.world 1 points 1 month ago

A multi-billion dollar social media company sued an ad industry group that was trying to help companies establish some kind of brand safety standards, to keep a company's ads from appearing next to objectionable content. The group reportedly had two full-time staff members. This isn't some big win; the lawsuit itself is bullying.

[–] Spotlight7573@lemmy.world 6 points 1 month ago

Basically, with passkeys you have a public/private key pair that is generated for each account on each site and stored somewhere on your end (on a hardware device, in a password manager, etc.). When setting it up, you give your public key to the site so that it can recognize you in the future. When you want to prove that it's you, the website sends you a unique challenge message and asks you to sign it (unique, to prevent replay attacks). There's some extra stuff in the spec regarding how the keys are stored and how the user is verified on the client side (such as requiring both access to the key and some kind of presence test or knowledge/biometric factor), but for the most part it's like certificates, but easier.
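For illustration, here's a minimal sketch of that flow using the browser's WebAuthn API (the calls behind passkeys). The rp/user details are made-up placeholders, and in a real flow the challenge bytes come from the server, which also verifies the signed response:

```typescript
// Registration: the browser/authenticator generates a key pair for this site
// and returns the public key for the server to store.
async function registerPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      // Assumption: in a real flow this challenge is issued by the server.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { id: "example.com", name: "Example" },          // placeholder relying party
      user: {
        id: new TextEncoder().encode("user-123"),          // placeholder user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { userVerification: "preferred" },
    },
  });
}

// Authentication: the server sends a fresh challenge; the authenticator signs
// it with the private key, proving possession without revealing the key.
async function signInWithPasskey(serverChallenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: serverChallenge, // unique per attempt, which is what blocks replays
      rpId: "example.com",
      userVerification: "preferred",
    },
  });
}
```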

[–] Spotlight7573@lemmy.world 6 points 1 month ago (1 children)

Don't most DoH resolver settings have you enter the IP address (for the actual lookup connection) along with the hostname of the DoH server (for certificate validation over HTTPS)? Wouldn't this avoid the first-lookup problem, because there would be a certificate mismatch if someone tried to intercept it?
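As a rough sketch of that bootstrap idea (hypothetical hostname and IP, using Node's https module): the TCP connection goes straight to the configured IP, while TLS still validates the certificate against the resolver's hostname, so an interceptor answering at that IP couldn't present a valid cert:

```typescript
import https from "node:https";

// Assumed placeholders: a DoH resolver's hostname and its pinned IP address.
const RESOLVER_HOST = "doh.example.net";
const RESOLVER_IP = "192.0.2.53";

// Connect directly to the pinned IP (no prior DNS lookup needed)...
const req = https.request(
  {
    host: RESOLVER_IP,
    servername: RESOLVER_HOST, // ...but validate the TLS cert against the hostname
    headers: { host: RESOLVER_HOST, accept: "application/dns-json" },
    path: "/dns-query?name=example.com&type=A",
  },
  (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    // If an on-path attacker had answered at RESOLVER_IP, the TLS handshake
    // would have failed before we ever got here.
    res.on("end", () => console.log(body));
  }
);
req.on("error", (err) => console.error("TLS/connection error:", err.message));
req.end();
```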

[–] Spotlight7573@lemmy.world 108 points 1 month ago (22 children)

With a breach of this size, I think we're officially at the point where enough people's data is out there that knowledge-based security questions should be considered unsafe. We need to come up with different authentication methods.

 

The Pro Codes Act has been submitted as an amendment to the "must pass" National Defense Authorization Act (NDAA). It allows copyrighted standards to be incorporated by reference into the law, preventing people from accessing or sharing these standards except through the standards development organizations' own systems, which must make "all portions of the standard so incorporated publicly accessible online at no monetary cost and in a format that includes a searchable table of contents and index, or equivalent aids to facilitate the location of specific content." Note that this does not include searchable text, the ability to access it without a login, or any ability to host it elsewhere (such as alongside the laws that incorporate it).

The NDAA bill:

https://rules.house.gov/bill/118/hr-8070

The amendment:

https://amendments-rules.house.gov/amendments/ISSA_180_xml240531155108634.pdf

[–] Spotlight7573@lemmy.world 20 points 1 month ago (1 children)

The plan was only to kill off third-party cookies, not first-party ones, so being able to log into stuff (and stay logged in) wasn't going to be affected. Most other browsers have already blocked or limited third-party cookies, but most other browsers aren't owned by a company that runs a dominant ad-tech business, so they can just make those changes without consulting anyone.

Also, it looks like there might be some kind of standard for federated login being worked on but I haven't really investigated it: https://developer.mozilla.org/en-US/docs/Web/API/FedCM_API
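If it works the way the MDN docs describe, the FedCM sign-in call looks something like this (a hedged sketch; the config URL and client ID are placeholders, and browser support is still limited):

```typescript
// FedCM: ask the browser to mediate federated sign-in instead of relying on
// third-party cookies or redirects. Values below are assumptions.
async function signInWithIdP(): Promise<Credential | null> {
  return navigator.credentials.get({
    identity: {
      providers: [
        {
          configURL: "https://idp.example/config.json", // placeholder IdP config
          clientId: "my-client-id",                     // placeholder RP client ID
        },
      ],
    },
  } as CredentialRequestOptions); // cast: FedCM types may not be in lib.dom yet
}
```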

[–] Spotlight7573@lemmy.world 19 points 1 month ago

They definitely knew it would impact their ad business but I think what did it was the competition authorities saying they couldn't do it to their competitors either, even if they were willing to take the hit on their own services.

Impact on their business (bold added): https://support.google.com/admanager/answer/15189422

  • Programmatic revenue impact without Privacy Sandbox: By comparing the control 2 arm to the control 1 arm, we observed that removing third-party cookies without enabling Privacy Sandbox led to -34% programmatic revenue for publishers on Google Ad Manager and -21% programmatic revenue for publishers on Google AdSense.
  • Programmatic revenue impact with Privacy Sandbox: By comparing the treatment arm to control 1 arm, we observed that removing third-party cookies while enabling the Privacy Sandbox APIs led to -20% and -18% programmatic revenue for Google Ad Manager and Google AdSense publishers, respectively.
[–] Spotlight7573@lemmy.world 1 points 2 months ago (1 children)

For scenario one, they totally need to delete the data used for age verification after they collect it, according to the law (unless another law says they have to keep it), and you can trust every company to follow the law.

For scenario two, that's where the age verification requirements of the law come in.

[–] Spotlight7573@lemmy.world 2 points 2 months ago (1 children)

No, no, no, it's super secure you see, they have this in the law too:

Information collected for the purpose of determining a covered user's age under paragraph (a) of subdivision one of this section shall not be used for any purpose other than age determination and shall be deleted immediately after an attempt to determine a covered user's age, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.

And they'll totally never be hacked.

[–] Spotlight7573@lemmy.world 1 points 2 months ago* (last edited 2 months ago) (3 children)

From the description of the bill (bold added):

https://legislation.nysenate.gov/pdf/bills/2023/S7694A

To limit access to addictive feeds, this act will require social media companies to use commercially reasonable methods to determine user age. Regulations by the attorney general will provide guidance, but this flexible standard will be based on the totality of the circumstances, including the size, financial resources, and technical capabilities of a given social media company, and the costs and effectiveness of available age determination techniques for users of a given social media platform. For example, if a social media company is technically and financially capable of effectively determining the age of a user based on its existing data concerning that user, it may be commercially reasonable to present that as an age determination option to users. Although the legislature considered a statutory mandate for companies to respect automated browser or device signals whereby users can inform a covered operator that they are a covered minor, we determined that the attorney general would already have discretion to promulgate such a mandate through its rulemaking authority related to commercially reasonable and technologically feasible age determination methods. The legislature believes that such a mandate can be more effectively considered and tailored through that rulemaking process. Existing New York antidiscrimination laws and the attorney general's regulations will require, regardless, that social media companies provide a range of age verification methods all New Yorkers can use, and will not use age assurance methods that rely solely on biometrics or require government identification that many New Yorkers do not possess.

In other words: sites will have to figure it out and make sure that it's both effective and non-discriminatory, and the safe option would be for sites to treat everyone like children until proven otherwise.

[–] Spotlight7573@lemmy.world 38 points 3 months ago (1 children)

Doesn't necessarily need to be anyone with a lot of money, just a lot of people mass reporting things combined with automated systems.

 

the company says that Recall will be opt-in by default, so users will need to decide to turn it on

 

From the article:

Google must face a £13.6bn lawsuit alleging it has too much power over the online advertising market, a court has ruled.

The case, brought by a group called Ad Tech Collective Action LLP, alleges the search giant behaved in an anti-competitive way which caused online publishers in the UK to lose money.

And the actual case at the UK's Competition Appeal Tribunal:

https://www.catribunal.org.uk/cases/15727722-15827723-ad-tech-collective-action-llp

The claims by Ad Tech Collective Action LLP are for loss and damage allegedly caused by the Proposed Defendants’ breach of statutory duty by their infringement of section 18 of the Competition Act 1998 and Article 102 of the Treaty on the Functioning of the European Union. The PCR seeks to recover damages to compensate UK-domiciled publishers and publisher partners, for alleged harm in the form of lower revenues caused by the Proposed Defendants' conduct in the ad tech sector.

 

Upcoming Policy Changes

One of the major focal points of Version 1.5 requires that applicants seeking inclusion in the Chrome Root Store must support automated certificate issuance and management. [...] It’s important to note that these new requirements do not prohibit Chrome Root Store applicants from supporting “non-automated” methods of certificate issuance and renewal, nor require website operators to only rely on the automated solution(s) for certificate issuance and renewal. The intent behind this policy update is to make automated certificate issuance an option for a CA owner’s customers.

 

[...]

To provide better security, Google introduced an Enhanced Safe Browsing feature in 2020 that offers real-time protection from malicious sites you are visiting. It does this by checking in real-time against Google's cloud database to see if a site is malicious and should be blocked.

[...]

Google announced today that it is rolling out the Enhanced Safe Browsing feature to all Chrome users over the coming weeks without any way to go back to the legacy version.

The browser developer says it's doing this as the locally hosted Safe Browsing list is only updated every 30 to 60 minutes, but 60% of all phishing domains last only 10 minutes. This creates a significant time gap that leaves people unprotected from new malicious URLs.

[...]
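For a flavor of what a real-time, cloud-side check looks like, here's a sketch against the public Safe Browsing v4 Lookup API. Note this is not Chrome's internal Enhanced Safe Browsing protocol (which sends partial hashes rather than full URLs); the API key and client fields are placeholders:

```typescript
// Query Google's Safe Browsing v4 Lookup API for a single URL.
// Assumption: YOUR_API_KEY and the client fields are placeholders.
async function checkUrl(url: string): Promise<boolean> {
  const res = await fetch(
    "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=YOUR_API_KEY",
    {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        client: { clientId: "example-client", clientVersion: "1.0" },
        threatInfo: {
          threatTypes: ["MALWARE", "SOCIAL_ENGINEERING"],
          platformTypes: ["ANY_PLATFORM"],
          threatEntryTypes: ["URL"],
          threatEntries: [{ url }],
        },
      }),
    }
  );
  const data = await res.json();
  // The API returns { matches: [...] } when the URL is flagged, {} otherwise.
  return Array.isArray(data.matches) && data.matches.length > 0;
}
```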

 

cross-posted from: https://lemmy.world/post/3301227

Chrome will be experimenting with defaulting to https:// if the site supports it, even when an http:// link is used, and will warn about downloads from insecure sources for "high-risk files" (the example given is an exe). They're also planning on enabling it by default for Incognito Mode and for "sites that Chrome knows you typically access over HTTPS".
