phase_change

joined 1 year ago
[–] phase_change@sh.itjust.works 3 points 2 months ago

The person isn’t talking about automation being difficult for a hosted website. They’re talking about a third-party system that doesn’t give you an easy way to automate, just a web GUI for uploading a cert. For example, neither our WAP interface nor our on-premise ERP offers a way to automate. Sure, we could probably write code to automate it and run the risk that it breaks after a vendor update. It’s easier to pay for a 12-month cert and do it manually.

[–] phase_change@sh.itjust.works 13 points 8 months ago* (last edited 8 months ago) (1 children)

Under the CMB method, it sounds like the calculation gives the same expansion rate everywhere. Under the Cepheid method, they get a different expansion rate, but it’s the same in every direction. Apparently, this isn’t the first time it’s been seen. What’s new here is that they did the calculation for 1,000 Cepheid variable stars. So, they’ve confirmed that an already-known discrepancy isn’t down to something weird about the few stars they’d looked at in the past.

So, the conflict here likely comes down to our understanding of either the CMB or Cepheid variables.

[–] phase_change@sh.itjust.works 101 points 8 months ago (6 children)

Except it’s not that they’re finding the expansion rate is different in some directions. Instead, they have two completely different ways of calculating the rate of expansion. One uses the cosmic microwave background radiation left over from the Big Bang. The other uses Cepheid variable stars.

The problem is that the Cepheid calculation comes out much higher than the CMB one. Both show the universe is expanding, but they give radically different numbers for the rate of expansion.
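For a sense of scale, the rough published figures (from memory, not from the article) are:

$$H_0^{\mathrm{CMB}} \approx 67.4 \pm 0.5\ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \qquad H_0^{\mathrm{Cepheid}} \approx 73 \pm 1\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$$

Only about an 8% gap, but the error bars don’t come close to overlapping, which is the whole problem.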

So, it’s not that the expansion isn’t spherical. It’s that we fundamentally don’t understand something well enough to nail down what that expansion rate is.

And the article content posted is just an excerpt. The rest of the article focuses on how AI can improve the efficiency of workers, not replace them.

Ideally, you’ve got a learned individual using AI to process data more efficiently, but one who is smart enough to ignore or toss out the crap and knows to review the output carefully with a critical eye. I suspect the reality is that most individuals using AI will just pass its output along uncritically.

I’m less worried about employees scared of AI and more worried about employees and employers embracing AI without any skepticism.

 

So, I’ve been self-hosting for decades, but on physical hardware. I’ve had things like MythTV and an Asterisk VoIP system, but those have been abandoned for years. I’ve got a web server, but it’s serving static content that’s only viewed by bots and attackers.

My mail server, which has been running for more than two decades, is still in active use.

All of this makes me weird in the self-hosted community.

About a month ago, I put in a beefy system for virtualization with the intent of branching out my self-hosting. I primarily considered Proxmox and xcp-ng. I went with xcp-ng, mainly because it seems to have more enterprise features. I’m early enough in my exploration that switching isn’t a problem.

For those of you further along with a home-lab hypervisor, what did you go with and why? Right now, I’m pretty agnostic. I’m comfortable with xcp-ng but have no problem switching. I’m especially interested in strongly negative opinions of one or the other, so long as you explain why.

Kids these days with their containers and their pipelines and their devops. Back in my day…

Don’t get me started about the internal devs at work. You’ve already got me triggered.

And I can just imagine the posts they’re making about how internal IT slows them down and causes issues with the development cycle.

 

TL;DR: old guy wants logs and more security in Docker setups. Doesn’t want to deal with the modern world.

I’m on the sh.itjust.works lemmy instance. I don’t know how to reference another community thread so that it works for everyone, so my apologies for pointing at sh.itjust.works, but my thoughts here are inspired by https://sh.itjust.works/post/54990 and my attempts to set up a Lemmy server.

I’m old school. I’m in my mid-50s. I was in academia as a student and then an employee from the mid-’80s through most of the ’90s. I’ve been in IT in the private sector since the late ’90s.

That means I was actively using IRC and Usenet before HTTP existed. I’ve managed publicly facing mail and web servers in my job since the ’90s. I’ve run personal mail and web servers since the early ’00s. I even had a static HTML page that was the number-one Google hit for an obscure financial search term for much of the 2000s. The referrer IPs and search terms could probably have been mined for data.

On the work side, I’ve seen multiple email account compromises. (I’d note zero when it was on-premise Lotus Notes. All of the compromises came after moving to O365. Those stopped for years once we moved to MFA, but this year we’ve seen two where the bad actors were able to MitM the MFA. That said, I don’t regret no longer supporting an on-prem Domino server: https://m.youtube.com/watch?v=Bk1dbsBWQ3k )

I’ve also seen a sophisticated vendor typosquatting email, combined with an internal email compromise, cost us significant cash.

Other than email compromise, I’m not aware of any other intrusions. (There are two kinds of companies: those that know they’ve been hacked and those that don’t.) I’m friends with some IT people at a company that was ransomwared. I still believe they have a tighter security stack than we do.

I’m paranoid about security because, like Farmers, I’ve seen a thing or two. We keep logs for a year, dumped into a SIEM that is designed to make it unlikely bad actors can get into it even if they take over AD or VMware. My home logging is less secure but still extensive. The idea is that even if I’m hit, I hope I have the logs to understand how and how extensively.

I still have public websites at home, but they don’t contain any content that matters. The only traffic they see is attack attempts and indexers that will index them and then shove them down into oblivion. I’m fine with that.

I still run a mail server at home. It’s mostly used so all my unique email addresses (sh.itjust.works@foo.com) can be forwarded to my personal O365 instance. If I need to reply from a unique address, I use Alpine in an SSH session.

Long prologue to explain my experience playing with a Lemmy instance this weekend. I’ve got an xcp-ng instance in the home lab and used it to get a Lemmy Docker instance running. It’s not yet exposed to the outside world.

I’m new to Docker. I’m new to Lemmy. I’m new to Nginx. (See the “old school” in the title.) At work and at home, I deal with Apache. I’ve got custom mod_rewrite rules and mod_security in place to deal with many attacks. I’m comfortable making the tweaks on both when a website breaks because of some rule.

I’ve tried putting an Apache proxy in front of my xcp-ng Lemmy instance, but it won’t work because Lemmy assumes initial contact via HTTP/1.1 with an HTTP status code of 101 to push to HTTP/2.0. Apache can proxy either, but not both. And Lemmy isn’t happy if the initial connection is HTTP/2.0.
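For the curious, this is roughly the sort of config I’ve been experimenting with, without success so far. The hostname and port are placeholders from my test setup, and the usual mod_proxy, mod_proxy_http, mod_proxy_wstunnel, and mod_rewrite modules all need to be loaded:

```apache
<VirtualHost *:443>
    ServerName lemmy.example.com        # placeholder hostname

    RewriteEngine On
    # Hand anything carrying an Upgrade header to the wstunnel proxy...
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule ^/(.*)$ ws://127.0.0.1:8536/$1 [P,L]
    # ...and send everything else over plain HTTP/1.1
    RewriteCond %{HTTP:Upgrade} !=websocket [NC]
    RewriteRule ^/(.*)$ http://127.0.0.1:8536/$1 [P,L]

    ProxyPreserveHost On
</VirtualHost>
```

The idea is to split the upgrade traffic from the regular requests, but so far Lemmy still isn’t happy with it.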

I’m also uncomfortable with my lack of knowledge regarding Nginx. I don’t know how to recreate my mod_rewrite rules, and I don’t think there’s an equivalent to mod_security.
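From skimming the Nginx docs, it looks like the simplest of my “block this pattern” rewrite rules would translate into location blocks, something like this (untested on my part, and the pattern here is just an illustration, not one of my real rules):

```nginx
# Rough Nginx analogue of a simple mod_rewrite deny rule (illustrative pattern only)
location ~* /wp-login\.php {
    return 403;
}
```

It’s the more involved rules, and anything mod_security-like, where I’m stuck.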

Worse, I don’t see an easy way to retain Docker logs. Yes, I could likely use volumes in a docker-compose.yml to retain them, but it’s far from clear what path that would be.
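The other option I’m mulling over is skipping volumes entirely and pointing the containers at my existing syslog box through Docker’s logging options. A rough sketch of what I have in mind (the collector address and tag are placeholders, and I haven’t actually tested this against the Lemmy compose file):

```yaml
services:
  lemmy:
    image: dessalines/lemmy          # image name as in the stock compose file
    logging:
      driver: syslog                 # ship container stdout/stderr off-box
      options:
        syslog-address: "udp://192.168.1.50:514"   # placeholder for my syslog/SIEM collector
        tag: "lemmy"                               # tag so the SIEM can tell containers apart
```

That would at least land the logs in the same place as everything else, though it does nothing for the sample-config security question.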

I know all of these are solvable concerns with some effort, but I suspect few put in that effort.

How do all of you who run containers in a home lab sleep at night knowing all that log data is ephemeral unless you make a special effort? How do you sleep knowing the sample configs you’re using in containers have little security built in?

Yeah, my hope is the small learning curve to join the fediverse means we don’t end up with the bulk of the active posters on Reddit.

My fear is that Lemmy is about to see some attacks the fediverse isn’t ready to defend against.

 

It’s not even June 12 for me, yet I suspect many subreddits went dark based on UTC.

I moved to Reddit during the Digg migration. Thus, I got the default subscriptions from back in the day. Over the years, I’ve unsubscribed from things I felt were crap, and I’ve added a number of subreddits.

Already, many have gone dark. My old.reddit.com homepage already looks very different from normal, and I know that a few subreddits that do still show up have announced they’ll go dark. I assume they’re US-based and timing it locally.

I’ve spent more time in the Lemmy fediverse than on Reddit since joining, but I’ve spent time on both.

I’ll admit to cynical skepticism of the impact of the darkening. I still don’t think it will make a difference in Reddit policy, but I now believe it will have a larger impact on Reddit traffic than I imagined.

I still expect it to produce no change in Reddit’s attitude or, really, in its users.

Yeah, Usenet is what my brain mapped Lemmy to. You get your feed and post through your server. You read posts from others on other servers. Each local server decides what feeds it will carry.

Of course, there’s no central hierarchy for the communities like Usenet had.