BarryZuckerkorn

joined 1 year ago
[–] BarryZuckerkorn@beehaw.org 5 points 3 weeks ago

> increasing layers of Innuendo

Well, also, these are documents written in the past, before 1948, when the Supreme Court invalidated the effect of racial covenants.

But the language remains, with no legal effect. It's still there and should be eliminated. There's no cat-and-mouse game, just the need to clean up something left over from the past.

[–] BarryZuckerkorn@beehaw.org 4 points 1 month ago (1 children)

Define "breast." On the one hand, rows of nipples (cats and dogs) or multiple nipples from an udder (cows, elephants) don't really seem "breast" like.

And platypus milk, secreted through the skin without nipples, is certainly not breast-like.

[–] BarryZuckerkorn@beehaw.org 1 points 1 month ago

This isn't my field, and some undergraduate philosophy classes I took more than 20 years ago might not be leaving me well equipped to understand this paper. So I'll admit I'm probably out of my element, and want to understand.

That being said, I'm not reading this paper the way you are.

> This is exactly what they've proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called) in a tractable time, which is a known NP-hard problem. Ergo, the current learning techniques which are tractable will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you can use the exact same proof presented in the paper again).

But they've defined the AI-by-Learning problem in a specific way (here's the informal definition):

> Given: A way of sampling from a distribution D.
>
> Task: Find an algorithm A (i.e., ‘an AI’) that, when run for different possible situations as input, outputs behaviours that are human-like (i.e., approximately like D for some meaning of ‘approximate’).

I read this as defining the problem by the need to sample from D, that is, to "learn."

> The explicit point is to show that it doesn't matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI.

But the caveat I'm reading, implicit in the paper's definition of the AI-by-Learning problem, is that the proof covers one entire class of methods: learning from a perfect sample of intelligent outputs in order to mimic intelligent outputs.

> General Intelligence has a set definition that the paper's authors stick with. It's not as simple as "it's a human-like intelligence" or something that merely approximates it.

The paper defines it:

> Specifically, in our formalisation of AI-by-Learning, we will make the simplifying assumption that there is a finite set of possible behaviours and that for each situation s there is a fixed number of behaviours Bs that humans may display in situation s.

It's just defining an approximation of human behavior, and saying that achieving that formalized approximation through inference from training data is intractable. So I'm still seeing a definition of human-like behavior, which would by definition be satisfied by actual human behavior. That's the circularity here: whether human behavior fits some other definition of AGI doesn't actually affect the proof. They're proving that learning to be human-like is intractable, not that achieving AGI itself is intractable.

I think it's an important distinction, if I'm reading it correctly. But if I'm not, I'm also happy to be proven wrong.

[–] BarryZuckerkorn@beehaw.org 2 points 1 month ago

> I can't think of a scenario where we've improved something so much that there's just absolutely nothing we could improve on further.

Progress itself isn't inevitable. Just because it's possible doesn't mean that we'll get there, because the history of human development shows that societies can and do stall, reverse, etc.

And even if all human societies tend towards progress, progress could still hit dead ends and stop there. Conceptually, it's like climbing a mountain with the algorithm "if there is higher elevation near you, go towards it, and avoid stepping downward in elevation." Eventually that algorithm brings you to a local peak. But the local peak might not be the highest point on the mountain, and while it was theoretically possible to have reached the true peak from the beginning, the climber who insists on never stepping downward is now stuck. Or, it's possible to get to the true peak, but it requires climbing downward for a time and then climbing up past elevations we've already been to, on paths we hadn't been on. One can imagine a society that refuses to step downward, breaking the inevitability of progress.
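The climbing metaphor above can be sketched as a greedy hill-climb. This is a toy illustration only; the elevation function and its two peaks are my own invention, not anything from the paper:

```python
def hill_climb(elevation, x, step=1):
    """Greedy ascent: move to a neighbor only if it is strictly higher."""
    while True:
        neighbors = [x - step, x + step]
        best = max(neighbors, key=elevation)
        if elevation(best) <= elevation(x):
            return x  # local peak: no higher neighbor, so the climber stops
        x = best

# A toy ridge: a local peak at x=2 (height 4) and the true peak at x=10 (height 10).
def elevation(x):
    return max(4 - abs(x - 2), 10 - abs(x - 10))

print(hill_climb(elevation, 0))  # stops at the local peak, x = 2
print(hill_climb(elevation, 6))  # from a different start, reaches the true peak, x = 10
```

A climber starting at x = 0 gets stuck at the lower peak because every escape route first goes downhill, which is exactly the "stalled society" in the metaphor.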

This paper identifies a specific dead end and advocates against hoping for general AI through computational training. It is, in effect, arguing that even though we can still see plenty of places that are higher elevation than where we are standing, we're headed towards a dead end, and should climb back down. I suspect that not a lot of the actual climbers will heed that advice.

[–] BarryZuckerkorn@beehaw.org 1 points 1 month ago (2 children)

> That's assuming that we are a general intelligence.

But it's easy to just define general intelligence as something approximating what humans already do. The paper itself only analyzed whether it was feasible to have a computational system that produces outputs approximately similar to humans, whatever that is.

> True, they've only calculated it'd take perhaps millions of years.

No, you're missing my point, at least as I read the paper. They're saying that the method of using training data to computationally develop a neural network is a conceptual dead end. Throwing more resources at the NP-hard problem isn't going to solve it.

What they didn't prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It's just that this particular method of inferential training, what they call "AI-by-Learning," is an NP-hard computational problem.

[–] BarryZuckerkorn@beehaw.org 8 points 1 month ago

> Though a superhero, Bruce Schneier disdains the use of a mask or secret identity as 'security through obscurity'.

source

[–] BarryZuckerkorn@beehaw.org 10 points 1 month ago (4 children)

The paper's scope is to prove that AI cannot feasibly be trained, using training data and learning algorithms, into something that approximates human cognition.

The limits of that finding are important here: it's not that creating an AGI is impossible, it's just that however it will be made, it will need to be made some other way, not by training alone.

Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

So it may still be the case that AGI via computation alone is possible, and that creating such an AGI will not require solving an NP-hard problem. But this paper closes off one pathway that many believe is viable (if the paper's proof is actually correct; I'm definitely not the person to make that evaluation). That doesn't mean they've proven there's no pathway at all.

[–] BarryZuckerkorn@beehaw.org 5 points 2 months ago

Yes, but they only performed the training on the posts and images set to be globally accessible to anyone. In a sense, they took the public permissions as an indicator that they could use that data for more than just providing the bare social media service.

[–] BarryZuckerkorn@beehaw.org 7 points 2 months ago (3 children)

Isn't the opt-out option to just not make the photos/posts globally public?

[–] BarryZuckerkorn@beehaw.org 2 points 3 months ago

It's a bill to create technical standards by which anyone can mark their digital files with a rough analogue of a robots.txt that says "don't train on this file," and a requirement for AI training to obey that standard. It's for everyone, because copyright is for everyone who creates pretty much anything.
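As a sketch of what such a standard could look like on the crawler side (purely hypothetical: the bill doesn't specify a format, and the `ai.txt` name and `no-train` directive here are my inventions, loosely modeled on robots.txt):

```python
def may_train_on(opt_out_text: str, path: str) -> bool:
    """Return False if any (hypothetical) 'no-train' rule in the
    opt-out file matches the given file path, robots.txt-style."""
    for line in opt_out_text.splitlines():
        line = line.strip()
        if line.startswith("no-train:"):
            prefix = line.split(":", 1)[1].strip()
            if path.startswith(prefix):
                return False
    return True

rules = """\
# Hypothetical ai.txt: disallow training on everything under /photos
no-train: /photos/
"""

print(may_train_on(rules, "/photos/cat.jpg"))    # False: opted out of training
print(may_train_on(rules, "/blog/post1.html"))   # True: no matching rule
```

The point of the bill, as described, is the legal requirement that trainers honor a check like this, not the technical format itself.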

[–] BarryZuckerkorn@beehaw.org 3 points 3 months ago (1 children)

Plus, the net worth of Trump's cabinet is basically the highest in history:

  • Betsy DeVos (Education): $2 billion
  • Wilbur Ross (Commerce): $600 million
  • Steve Mnuchin (Treasury): $400 million

Linda McMahon is worth $3 billion but was "only" SBA administrator, not a cabinet secretary.

Jared Kushner is from a family whose combined net worth is in the billions, and he got his dad a pardon.

Let's not forget major donors like the Adelsons, the Kochs, and everyone else.

Yes, Democrats have some billionaires on their side, but the balance isn't even close.

[–] BarryZuckerkorn@beehaw.org 31 points 3 months ago

In my opinion, it's quite similar to Brexit: maybe you can get a majority coalition to disapprove of the status quo, but good luck getting them to actually propose a more popular alternative. Much less proposing an actual procedure for getting that alternative onto ballots.

Structurally and functionally, our political systems are not set up to run anyone other than the person who won the primary. Changing a presumptive nominee this late in the cycle is fraught with potential complications, but can be done if there's sufficient support for a specific alternative candidate. Realistically, it's Biden or it's Harris. There's no feasible way to get someone else at the top of the ticket.

 

What's something you love, and love describing or explaining to people who are new to that interest, hobby, or activity?

 

I now have a working Linux installation on my laptop. Honestly, I doubted I'd ever be here again.

I quit my sysadmin job a little over 10 years ago to pursue a non-technical career (law school, now lawyer), and I just didn't have the mental bandwidth to keep up with all the changes being made in the Linux world: systemd, Wayland, the rise of Docker and containerization, etc. Eventually, by 2015, I basically gave up on Linux as my daily driver. Still, when I bought a new laptop in 2019, I made sure to pick the MacBook with the best Linux hardware support at the time (the 2017 13" MacBook Pro without the touchbar or any kind of security chip, aka the 14,1), just in case I ever wanted to give Linux a try again.

When the reddit API/mod controversy was brewing this summer, I switched over to lemmy as my primary "forum" and subscribed to a bunch of communities. And because lemmy/kbin seemed to attract a lot of tech-minded and somewhat more anti-authoritarian/anti-corporate folks, the discussions in the threads started to normalize the regular use of Linux and other free/open-source software as a daily driver.

So this week, I put together everything I needed to dual boot Linux and MacOS: boot/installation media for both MacOS and Linux, documentation specific to my Apple hardware, as well as notes on the things that have changed since my last Linux laptop (EFI versus BIOS, systemd-boot versus grub2, iwd versus wpa_supplicant, Wayland versus X, etc.). I made a few mistakes along the way, but I managed to learn from them, fix a few misconfigured things, and now have a working Linux system!

I still have a bunch of things on my to-do list: sound doesn't work (but there's a script that purports to fix that); suspend doesn't work (well, more accurately, I can't come back from suspend); text/icon size and scaling aren't 100% consistent on this high-DPI screen; network discovery doesn't work (I think I need to install zeroconf, but I don't know what it is and intend to understand it before I actually install and configure it); I'd like a pretty bootloader splash screen; and I still have to configure bash (or another shell? do people still use bash?) the way I like it.

But my system works. I have a desktop environment with a working trackpad (including haptic feedback), plus hardware keys for volume (never mind that sound doesn't actually work yet), screen brightness, and keyboard backlight brightness. I have networking. The battery life seems to be OK. Once I get comfortable with this as a daily driver, I might remove MacOS and dive right into a single OS on this device.

So thank you! Y'all are the best.
