Elephant0991

joined 1 year ago
 

Comment

I hope nobody loses their shirt over this.

Summary

  • Sensitive data exposed: Internal code, infrastructure diagrams, passwords, and other technical information were publicly accessible on GitHub for months.
  • Source unclear: It is not known whether an outside hacker or a Binance employee uploaded the data.
  • Potential risk: Information could be used by attackers to compromise Binance systems, though Binance claims "negligible risk".
  • Data details: Included code related to passwords and multi-factor authentication, diagrams of internal infrastructure, and apparent production system passwords.
  • Binance response: Initially downplayed the leak; later acknowledged the data was theirs but insisted the risk was minimal.
  • Current status: Data removed from GitHub via copyright takedown request.
  • Access unknown: It is unclear whether any malicious actors accessed the data.
 

Main findings:

  • The seasons, especially winter, can affect our mood, memory, concentration, social behavior, and sex drive.
  • Shorter daylight hours are linked to winter blues and Seasonal Affective Disorder (SAD), characterized by low mood, sleep issues, and energy loss.
  • Reduced light disrupts our circadian rhythm, impacting mood and cognitive function.
  • Vitamin D deficiency (from less sunlight) might also contribute to winter blues and cognitive decline.
  • We may subconsciously seek warmth and social connection during colder months, explaining increased interest in romance films and social activities.
  • Sexual activity fluctuates across seasons, potentially due to the desire for physical and emotional warmth.

Key takeaways:

  • Embrace positive aspects of winter like its beauty and coziness to improve mood.
  • Cognitive behavioural therapy can help manage negative thoughts and boost winter well-being.
  • Don't be hard on yourself if you're forgetful or less social during winter; it's natural.
  • Actively seek social connection and engage in activities you enjoy to combat winter blues.
  • A 2019 Cochrane systematic review concluded that the evidence for light therapy's effectiveness as a preventative treatment was limited.
[–] Elephant0991@lemmy.bleh.au 3 points 8 months ago

Probably got some parachute built in.

 

Key Points:

  • Security and privacy concerns: Increased use of AI systems raises issues like data manipulation, model vulnerabilities, and information leaks.
  • Threats at various stages: Training data, software, and deployment are all vulnerable to attacks like poisoning, data breaches, and prompt injection.
  • Attacks with broad impact: Availability, integrity, and privacy can all be compromised by evasion, poisoning, privacy, and abuse attacks.
  • Attacker knowledge varies: Threats can be carried out by actors with full, partial, or minimal knowledge of the AI system.
  • Mitigation challenges: Robust defenses are currently lacking, and the tech community needs to prioritize their development.
  • Global concern: NIST's warning echoes recent international guidelines emphasizing secure AI development.
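To make the poisoning threat above concrete, here is a toy sketch: all numbers are invented, and a nearest-centroid "model" stands in for a real classifier, but it shows how a handful of mislabeled training points can move a decision boundary.

```python
# Toy illustration of a data-poisoning attack on a nearest-centroid
# classifier. All data here is made up for demonstration.

def centroid(points):
    return sum(points) / len(points)

def classify(x, benign, malicious):
    """Label x by whichever class centroid is closer."""
    if abs(x - centroid(benign)) < abs(x - centroid(malicious)):
        return "benign"
    return "malicious"

# Clean training data: benign samples cluster near 1.0, malicious near 9.0.
benign = [0.8, 1.0, 1.2, 1.1]
malicious = [8.9, 9.0, 9.2]

print(classify(5.5, benign, malicious))            # -> malicious

# Poisoning: an attacker slips mislabeled points into the benign set,
# dragging its centroid toward the malicious region.
poisoned_benign = benign + [8.0, 8.5, 9.1]

print(classify(5.5, poisoned_benign, malicious))   # -> benign
```

The same sample is now waved through as benign, which is exactly the integrity failure the NIST taxonomy warns about; real attacks do this against far larger models, where the poison is much harder to spot.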

Overall:

NIST identifies serious security and privacy risks associated with the rapid deployment of AI systems, urging the tech industry to develop better defenses and implement secure development practices.

Comment:

From the look of things, it's going to get worse before it gets better.

[–] Elephant0991@lemmy.bleh.au 1 point 8 months ago

Of course if you care to look carefully, and sometimes closely.

 

  • Previous images of Neptune and Uranus, particularly from Voyager 2, were inaccurately blue and green due to image processing for detail enhancement.
  • New research analyzed data from Hubble and VLT telescopes to reveal their "true" colors are both a similar shade of greenish-blue.
  • Neptune still has a slightly bluer tinge due to a thinner haze layer.
  • Uranus may appear slightly greener in summer/winter but bluer in spring/autumn due to its unique tilt.
  • This research corrects a long-held misconception about these distant planets.
[–] Elephant0991@lemmy.bleh.au 6 points 8 months ago

Deeply discounted, yet with the satisfying conclusion that our external clients get! /s

 

cross-posted from: https://zerobytes.monster/post/5063838

I guess if the law firm handles its own data breach this way, you can expect the companies it advises to handle their breaches the same way.

Summary

The international law firm Orrick, Herrington & Sutcliffe, specializing in handling security incidents for companies, suffered a cyberattack in March 2023, resulting in the exposure of sensitive health information belonging to over 637,000 data breach victims.

The stolen data included consumer names, dates of birth, postal and email addresses, and government-issued identification numbers, such as Social Security numbers, passport and driver's license numbers, and tax identification numbers. It also included medical treatment and diagnosis information, insurance claims information (such as the dates and costs of services), and healthcare insurance numbers and provider details.

Orrick, which serves as legal counsel to other companies during security incidents, revealed that the breach also affected clients such as EyeMed Vision Care, Delta Dental, MultiPlan, Beacon Health Options, and the U.S. Small Business Administration. The number of affected individuals has tripled since the initial disclosure. In December, Orrick reached a settlement of class action lawsuits that accused it of failing to inform victims until months after the incident; the firm acknowledged the incident's impact and expressed regret for the inconvenience caused. It did not disclose how the hackers got in or whether a ransom was demanded.

 

Summary:

The article discusses the phenomenon of microchimerism, where cells from a developing fetus can integrate into the mother's body and persist for years, potentially influencing various aspects of health. This bidirectional transfer of cells between mother and fetus during pregnancy is suggested to occur in various organs, such as the heart, lungs, breast, colon, kidney, liver, and brain. These cells, referred to as microchimeric cells, are genetically distinct entities that may play a role in immune system development, organ acceptance in transplantation, and even influencing behavior.

Researchers propose that microchimeric cells might impact susceptibility to diseases, pregnancy success, and overall health. Studies in mice suggest that these cells acquired during gestation could fine-tune the immune system and contribute to successful pregnancies. The article explores potential benefits and drawbacks of microchimerism, including its role in autoimmune diseases, organ acceptance in transplantation, and pregnancy complications.

Despite the widespread presence of microchimeric cells in individuals, many aspects of their function remain unclear, leading to debates among researchers. Some scientists believe that these cells may be influential architects of human life, potentially holding therapeutic implications for conditions like autoimmune diseases and high-risk pregnancies. However, challenges in studying microchimerism, including their rarity and hidden locations within the body, contribute to the ongoing controversy and uncertainty surrounding their significance.

 

Summary:

The author reflects on the challenges of memory and highlights a forgotten but valuable feature of Google Assistant on Android. The feature, called "Open memory," serves as a hub for Assistant's cross-platform information-storing system. Users can ask Google Assistant to remember specific information, and the "Open memory" command allows them to access a comprehensive list of everything stored, making it a useful tool for recalling details from any device connected to Google Assistant. The article emphasizes the potential of this feature for aiding memory and suggests incorporating it into daily habits for better recall.

[–] Elephant0991@lemmy.bleh.au 1 point 8 months ago

That seems totally workable. Spin it, and you have artificial gravity. You can be in fact riding a Spinning Space Dick.

[–] Elephant0991@lemmy.bleh.au 5 points 8 months ago

Oh, the horror! I think I did the best I possibly could, given the circumstances.

[–] Elephant0991@lemmy.bleh.au 5 points 8 months ago

I guess people will cheat and hide it everywhere.

[–] Elephant0991@lemmy.bleh.au 48 points 1 year ago* (last edited 1 year ago)

While corporate America focuses mainly on profits, "fighting for human rights" is just an empty slogan; corporate America already exploits human misery for profit. For government, the driver of this sort of initiative will be "preventing China from becoming the dominant tech power in the developing world," and it will most likely produce mixed results or fail miserably altogether. Chinese exports already dominate the non-elite consumer markets of the developing world.

[–] Elephant0991@lemmy.bleh.au 6 points 1 year ago

When I forgot part of my old password, I came up with a list of words I could plausibly have used and tried those. I eventually found it, even if I was panicky the whole time. If I were you, I would list the words and try them in order of probability.

Un/Fortunately, BW rate-limits password brute-forcing. I feel you about CAPTCHA hell; I hate their surreal sunflower CAPTCHA (maybe it's meant to be as repulsive as possible to the hackers?).
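The "list words, try them in order of probability" approach can be sketched like this. The `remembered` words and `suffixes` are hypothetical examples of the kind of habits worth enumerating; the guesses are meant to be typed in by hand, since rate limiting makes automated guessing impractical anyway.

```python
from itertools import product

def candidate_passwords(words, suffixes):
    """Yield password guesses: remembered words combined with habitual
    suffixes, with the most likely words listed first."""
    for word, suffix in product(words, suffixes):
        yield word + suffix

# Words you think you used, ordered from most to least likely.
remembered = ["hunter", "Hunter"]
# Suffixes you habitually append.
suffixes = ["", "1", "!", "2023"]

guesses = list(candidate_passwords(remembered, suffixes))
print(guesses[0])   # most probable guess first
```

Because `product` iterates the first argument slowest, ordering `remembered` by likelihood keeps the most probable guesses at the front of the list.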

[–] Elephant0991@lemmy.bleh.au 13 points 1 year ago* (last edited 1 year ago)

True.

  • Automatic patch => automatic installation of malware

  • Manual patch => unpatched vulnerabilities

Screwed either way.

[–] Elephant0991@lemmy.bleh.au 22 points 1 year ago

Yeah, this is definitely a problem with brand-new services, especially when the native app isn't appealing. For example, I use Liftoff for Lemmy. Open-sourced✅ In official Appstore✅ Relatively transparent who the developer is✅ No special permissions starting off✅ Relatively few downloads📛

When a mobile app doesn't ask for permissions, it's definitely less nerve-racking than the more permissive desktop environments, where apps need no special privileges to do considerable damage.

 

Short Summary

The macOS app called NightOwl, originally designed to provide a night mode feature for Macs, has turned into a malicious tool that collects users' data and operates as part of a botnet. Originally well-regarded for its utility, NightOwl was bought by another company, and a recent update introduced hidden functionalities that redirected users' data through a network of affected computers. Web developer Taylor Robinson discovered that the app was running a local HTTP proxy without users' knowledge or consent, collecting users' IP addresses and sending the data to third parties. The app's certificate has been revoked, and it is no longer accessible. The incident highlights the risks associated with third-party apps that may have malicious intentions after updates or ownership changes.
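One low-effort way to notice behavior like an undisclosed local HTTP proxy is to probe localhost for listeners you didn't open yourself. A minimal sketch follows; the port list is an arbitrary assumption (common proxy ports), and this is no substitute for real malware analysis or the certificate revocation that actually shut NightOwl down.

```python
import socket

def local_listeners(ports):
    """Return the subset of ports with something listening on localhost."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 when the TCP connection succeeds,
            # i.e. something is accepting connections on that port.
            if s.connect_ex(("127.0.0.1", port)) == 0:
                open_ports.append(port)
    return open_ports

# Ports often used by local proxies; adjust to your own setup.
suspects = local_listeners([8080, 8888, 3128, 9050])
print("Local listeners found:", suspects)
```

Anything this turns up still has to be matched against software you knowingly run; the point is only that an app quietly opening a proxy leaves a footprint you can check for.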

Longer Summary

The NightOwl app was developed by Keeping Tempo, an LLC that went inactive earlier this year. The app was recently found to have been turned into a botnet by the new owners, TPE-FYI, LLC. The original developer, Michael Kramser, claims that he was unaware of the changes to the app and that he sold the company last year due to time constraints.

Gizmodo was unable to reach TPE-FYI, LLC for comment. However, Taylor Robinson, who discovered the botnet, said that it is not uncommon for shady companies to buy apps and then monetize them by integrating third-party SDKs that harvest user data.

Robinson also said that it is understandable why developers might sell their apps, even if it means sacrificing their morals. App development is both hard and expensive, and for individual creators, it can be tempting to take the money and run.

This is not the first time that a popular app has been turned into a botnet. In 2013, the Brightest Flashlight app was sued by the Federal Trade Commission after allegedly transmitting users' location data and device info to third parties. The developer eventually settled with the FTC for an undisclosed amount.

In 2017, software developers discovered that the Stylish browser extension started recording all of its users' website visits after the app was bought by SimilarWeb. Another extension, The Great Suspender, was flagged as malware after it was sold to an unknown group back in 2020.

All of these apps had millions of users before anyone recognized the signs of intrusion. In each case, the new owners' changes served a more intrusive kind of data harvesting, sold to third parties for an effort-free, morals-free payday.

Possible Takeaways

  • Minimize the software you use

  • Keep track of ownership changes

  • Use software from only the most reputable sources

  • Regularly review installed apps

  • Be suspicious of apps' unexpected behaviors and permission requests

 

Summary

  • The Marion County Record newsroom in Kansas was raided by police, who seized two cellphones, four computers, a backup hard drive, and reporting materials.

  • At least one seized computer was most likely unencrypted. Law enforcement officials hope that seized devices are unencrypted, as this makes them easier to examine.

  • Modern iPhones and Android phones are encrypted by default, but older devices may not be.

  • Desktop computers typically do not have encryption enabled by default, so it is important to turn this on manually.

  • Use strong random passwords and keep them in a password manager.

  • During the raid, police seized a single backup hard drive. It is important to have multiple backups of your data in case one is lost or stolen.

  • You can encrypt USB storage devices using BitLocker To Go on Windows, or Disk Utility on macOS.

  • All major desktop operating systems support Veracrypt, which can be used to encrypt entire drives.
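For the strong-random-passwords bullet above, Python's `secrets` module provides a CSPRNG-backed generator. A small sketch; the length, character set, and word list below are arbitrary choices, not recommendations from the article.

```python
import secrets
import string

def random_password(length=24):
    """Generate a strong random password using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(wordlist, words=6):
    """Diceware-style passphrase: easier to type at a disk-unlock prompt."""
    return "-".join(secrets.choice(wordlist) for _ in range(words))

print(random_password())
```

A passphrase is usually the better fit for full-disk encryption, since pre-boot prompts are miserable places to type 24 characters of punctuation; either way, the result belongs in a password manager, not your memory.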

Main Take-aways

  • Encrypt your devices, drives, and USBs.

  • Use strong random passwords and a password manager.

  • Have multiple backups.

 

Paper & Examples

"Universal and Transferable Adversarial Attacks on Aligned Language Models." (https://llm-attacks.org/)

Summary

  • Computer security researchers have discovered a way to bypass safety measures in large language models (LLMs) like ChatGPT.
  • Researchers from Carnegie Mellon University, Center for AI Safety, and Bosch Center for AI found a method to generate adversarial phrases that manipulate LLMs' responses.
  • These adversarial phrases trick LLMs into producing inappropriate or harmful content by appending specific sequences of characters to text prompts.
  • Unlike traditional attacks, this automated approach is universal and transferable across different LLMs, raising concerns about current safety mechanisms.
  • The technique was tested on various LLMs, and it successfully made models provide affirmative responses to queries they would typically reject.
  • Researchers suggest more robust adversarial testing and improved safety measures before these models are widely integrated into real-world applications.
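The attack's core idea is a search for a suffix that pushes the model toward an affirmative reply. As a caricature only, here is a greedy search against a stub scoring function; the stub, charset, and prompt are invented stand-ins, and the paper's actual GCG method optimizes token-by-token using model gradients rather than random mutation.

```python
import random

def affirmative_score(prompt):
    """Stand-in for a model's log-probability of an affirmative reply.
    A real attack queries the target LLM; this stub just rewards a few
    character patterns so the search has something to climb."""
    return sum(prompt.count(c) for c in "!)]")

def greedy_suffix_search(base_prompt, charset, length=10, iters=200, seed=0):
    """Mutate one suffix position at a time, keeping changes that do not
    lower the score -- loosely mirroring coordinate-wise optimization."""
    rng = random.Random(seed)
    suffix = [rng.choice(charset) for _ in range(length)]
    best = affirmative_score(base_prompt + "".join(suffix))
    for _ in range(iters):
        i = rng.randrange(length)
        old = suffix[i]
        suffix[i] = rng.choice(charset)
        score = affirmative_score(base_prompt + "".join(suffix))
        if score < best:
            suffix[i] = old  # revert a non-improving mutation
        else:
            best = score
    return "".join(suffix), best

suffix, score = greedy_suffix_search("EXAMPLE PROMPT", "ab!)x]")
print(suffix, score)
```

The unsettling finding in the paper is that suffixes optimized this way against open models transfer to closed ones, which is what makes the attack "universal and transferable" rather than a per-model curiosity.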