this post was submitted on 03 May 2024
106 points (90.2% liked)

I know current learning models work a little like neurons but why not just make a sim that works exactly like how we understand neurons work

top 50 comments
[–] x86x87@lemmy.one 84 points 6 months ago (1 children)

Simulating even one neuron is very complex. The neurons in the artificial neural nets used in machine learning are a gross oversimplification. On top of this you need to get the wiring right. On top of that you need to get the sensory system right (a brain without input is worthless). On top of that you need an environment. So it's multiple layers of complexity that we haven't solved.

[–] BastingChemina@slrpnk.net 6 points 6 months ago (2 children)

What I find fascinating is the efficiency of the brain.

With a supercomputer and the energy of a nuclear station to run it we are able to simulate a handful of neurons interacting with each other.

On the other hand, the brain, with billions of neurons, only requires the energy of a potato or two to run.

[–] x86x87@lemmy.one 3 points 6 months ago

To be fair, nature had millions of years to optimize the power consumption, and we only observe the successful results since the failures didn't survive.

[–] db2@lemmy.world 57 points 6 months ago (7 children)

Because we don't understand it.

[–] givesomefucks@lemmy.world 42 points 6 months ago (1 children)

To clarify:

We don't even know how human intelligence/consciousness works, let alone how to simulate it.

But we know how an individual neuron works.

The issue with OP's idea is we don't know how to tell a computer what a bunch of neurons do to create an intelligence/consciousness.

[–] Neuromancer49@midwest.social 38 points 6 months ago (1 children)

Heck, we barely know how neurons work. Sure, we've got the important stuff down like action potentials and ion channels, but there's all sorts of stuff we don't fully understand yet. For example, we know the huntingtin protein is critical to neuron growth (maybe for axons?), and we know if the gene has too many mutations it causes Huntington's disease. But we don't know why huntingtin is essential, or how it actually affects neuron growth. We just know that cells die without it, or when it is malformed.

Now, take that uncertainty and multiply it by the sheer number of genes and proteins we haven't fully figured out and baby, you've got a stew going.

[–] subignition@kbin.social 16 points 6 months ago

To add to this, a new type of brain cell was discovered just last year. (I would have linked directly to the study, but there was a server error when I followed the citation.)

[–] IvanOverdrive@lemm.ee 4 points 6 months ago

To understand the complexity of the human brain, you need a brain more complex than the human brain.

[–] ninpnin@sopuli.xyz 55 points 6 months ago (2 children)

We don’t really understand how real neurons learn.

[–] Neuromancer49@midwest.social 12 points 6 months ago (1 children)

We've got some really good theories, though. Neurons make new connections and prune them over time. We know about two types of ion channels within the synapse: AMPA and NMDA. AMPA channels open within the post-synaptic neuron when glutamate is released by the pre-synaptic neuron, and the AMPA receptor allows sodium ions into the cell, causing it to activate.

If the post-synaptic cell fires for long enough, i.e. receives strong enough input from other cells / enough AMPA receptors open, the NMDA receptor opens and calcium enters the cell. (Typically a magnesium ion keeps it closed.) Once opened, it triggers a series of cellular mechanisms that strengthen the connection between the neurons.

This is how Donald Hebb's theory of learning works. https://en.wikipedia.org/wiki/Hebbian_theory?wprov=sfla1

Cells that fire together, wire together.
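That rule is simple enough to sketch in code. Here's a toy Hebbian weight update (a deliberate oversimplification; the layer sizes, learning rate, and activity values are made up for illustration):

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen w[i][j] wherever pre-synaptic unit j and post-synaptic unit i fire together."""
    return [[w[i][j] + lr * post[i] * pre[j] for j in range(len(pre))]
            for i in range(len(post))]

# Two toy input neurons and two toy output neurons; all weights start at zero
w = [[0.0, 0.0], [0.0, 0.0]]
pre = [1.0, 0.0]   # only the first input neuron fires
post = [1.0, 0.0]  # ...and it drives the first output neuron
for _ in range(10):
    w = hebbian_update(w, pre, post)

# Only the co-active pair strengthened: cells that fire together, wire together
print(w)
```

No biology here, of course: no glutamate, no NMDA gating, no timing. It only captures the correlational core of Hebb's idea.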

[–] onion@feddit.de 6 points 6 months ago

Name checks out

[–] WolfLink@lemmy.ml 51 points 6 months ago (4 children)

Short answer: Neural networks and other "machine learning" technologies are inspired by the brain but are focused on taking advantage of what computers are good at. Simulating actual neurons is possible, but it's not something computers are good at, so it will be slow and resource-intensive.

Long Answer:

  1. Simulating neurons is fairly complex. Not impossible; we can simulate microscopic worms, but simulating a human brain of 100 billion neurons would be a bit much even for modern supercomputers
  2. Even if we had such a simulation, it would run much slower than realtime. Note that such a simulation would involve data sent between networked computers in a supercomputing cluster, while in the brain signals only have to travel short distances. Also, what happens in the brain as a simple chemical release becomes many calculations in a simulation.
  3. “Training” a human brain takes years of constant input to go from a baby that isn’t capable of much to a child capable of speech and basic reasoning. Training an AI simulation of a human brain is going to take at least that long (and longer, given that the simulation will run slower than realtime)
  4. That human brain starts with some basic programming that we don’t fully understand
  5. Theres a lot more about the human brain we don’t fully understand
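Point 1 can be made concrete with a leaky integrate-and-fire model, one of the simplest spiking-neuron approximations (all constants below are illustrative, not fitted to real biology). Each simulated neuron costs an equation step every fraction of a millisecond, versus a single multiply-add for a machine-learning "neuron":

```python
# Leaky integrate-and-fire: dV/dt = (-(V - V_rest) + R*I) / tau
# All constants below are illustrative, not fitted to real data.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0  # membrane voltages, mV
TAU, R = 10.0, 1.0                                # time constant (ms), resistance
DT = 0.1                                          # ms per simulation step

def simulate(current, steps=1000):
    """Step the membrane voltage for 100 ms; return spike times (in ms)."""
    v, spikes = V_REST, []
    for i in range(steps):
        v += DT * (-(v - V_REST) + R * current) / TAU
        if v >= V_THRESH:          # threshold crossed: emit a spike and reset
            spikes.append(i * DT)
            v = V_RESET
    return spikes

print(len(simulate(20.0)))  # a strong input current produces a spike train
print(len(simulate(5.0)))   # a weak one never reaches threshold
```

Even this is a cartoon; real models add ion-channel dynamics, dendritic geometry, and chemistry, each multiplying the cost per neuron. Now multiply by 100 billion.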
[–] PhlubbaDubba@lemm.ee 22 points 6 months ago (1 children)

That's kinda the idea of neural network AI

The problem is that neurons aren't transistors. They don't operate in base-2 arithmetic, and they're basically an example of chaos theory: the system is narrow enough for its outer bounds to be defined, yet complex enough that the "picture resolution" needed to accurately predict how it will behave is beyond our current ability to replicate, or even theorize about.

This is basically the realm where you're no longer asking math to fetch a logical answer to a question, and more trying to use it to perfectly calculate the future, like an oracle trying to divine one's own fate from the stars. It even comes with its own system of cool runes!

I fully imagine we will have a precise calculation of Rayo's Number before we have a binary computer capable of being raised as a human with a fully human intelligence and emotional depth.

More likely I see the "singularity" coming in the form of someone who figures out how to augment human intelligence with an AI neural implant capable of the sorts of complex calculations that are impossible for a human mind to fathom while benefiting from human abilities for pattern recognition to build more accurate models.

If someone figures out how to do this without accidentally creating a cheap 80's slasher villain, it will immediately become the single most sought after medical device in human history, as these new augmented mind humans will instantly become a major competitive pressure for even most manual labor jobs.

[–] los_chill@programming.dev 20 points 6 months ago (3 children)

Neurons undergo physical change in their interconnectivity. New connections (synapses) are created, strengthened, and lost over time. We don't have circuits that can do that.

[–] Neuromancer49@midwest.social 15 points 6 months ago (1 children)

Actually, neuron-based machine learning models can handle this. The connections between the fake neurons can be modeled as a "strength", or the probability that activating neuron A leads to activation of neuron B. Advanced learning models just change the strength of these connections. If the probability is zero, that's a "lost" connection.

Those models don't have physical connections between neurons, but mathematical/programmed connections. Those are easy to change.
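That "strength" framing is literally how it's stored: one number per connection, and zeroing the number is the pruning. A hypothetical sketch (the layer size and pruning threshold are arbitrary):

```python
import random

random.seed(0)
# A tiny fully connected layer: weights[i][j] is the connection "strength"
# from input neuron j to output neuron i (values here are arbitrary)
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]

# "Losing" a connection is just setting its strength to zero;
# "pruning" drops the weakest connections, loosely analogous to losing a synapse
pruned = [[w if abs(w) > 0.5 else 0.0 for w in row] for row in weights]

survivors = sum(1 for row in pruned for w in row if w != 0.0)
print(survivors, "of 16 connections survive pruning")
```

Changing connectivity is a trivial array write here, which is the commenter's point: the hard part isn't mutability, it's fidelity.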

[–] FooBarrington@lemmy.world 10 points 6 months ago (1 children)

That's a vastly simplified model. Real neurons can't be approximated with a couple of weights - each neuron is at least as complex as a multi-layer RNN.

[–] TempermentalAnomaly@lemmy.world 3 points 6 months ago (1 children)

I'd love to know more.

I recently read "The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor" and found it compelling. It bristled a lot of feathers on Lemmy, but think their critique is valid.

Do you have any review resources? I have a bit of knowledge around biology and biochemistry, but haven't studied neuroscience.

And no pressure. It's a lot to ask dor some random person on the internet. Cheers!

[–] FooBarrington@lemmy.world 3 points 6 months ago

Here's the video that introduced me to the idea: https://www.youtube.com/watch?v=hmtQPrH-gC4

He explains it very well and gives a lot of references :)

[–] RememberTheApollo_@lemmy.world 7 points 6 months ago (2 children)

Did OP mean accomplishing the connectivity and with software rather than hardware? No, we don’t have hardware that can modify itself like a brain does, but I think it is possible to accomplish that with coding.

[–] palebluethought@lemmy.world 9 points 6 months ago* (last edited 6 months ago) (1 children)

Sure, but now you're talking about running a physical simulation of neurons. Real neurons aren't just electrical circuits. Not only do they evolve rapidly over time, they're powerfully influenced by their chemical environment, which is controlled by your body's other systems, and so on. These aren't just minor factors, they're central parts of how your brain works.

Yes, in principle, we can (and have, to some extent) run physical simulations of neurons down to the molecular resolution necessary to accomplish this. But the computational power required to do that is massively, like billions of times, more expensive than the "neural networks" we have today, which are really just us anthropomorphizing a bunch of matrix multiplication.

It's simply not feasible to do this at a scale large enough to be useful, even with all the computation on Earth.

[–] Dkarma@lemmy.world 4 points 6 months ago

Performance suffers. Basically, we don't have the computing power to scale the software to the performance levels of the human brain.

[–] masterspace@lemmy.ca 4 points 6 months ago* (last edited 6 months ago)

Yes we do. FPGAs and memristors can both recreate those effects at the hardware level. The problem is scaling them, and their necessary number of interconnections, to the number of neurons in the human brain, on top of getting their base wiring and connections close to how our genetics build and wire our base brains.

[–] rtfm_modular@lemmy.world 19 points 6 months ago (1 children)

First, we don’t understand our own neurons enough to model them.

An AI “neuron”, or node, is a math equation that takes numeric inputs, each with a variable “weight” that affects the output. An actual neuron is a cell with something like 6,000 synaptic connections, roughly 600 trillion synapses in total. How do you simulate that? I’d argue the magic of AI is how much more efficient it is comparatively, with only 176 billion parameters in GPT-4.
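The contrast is easy to see: the entire artificial "neuron" described here fits in a few lines (the inputs, weights, and bias values below are arbitrary):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The whole AI "neuron": a weighted sum squashed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One node, three inputs; a network is just millions of these stacked in layers
out = artificial_neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], bias=0.0)
print(out)  # a single number in (0, 1)
```

One multiply-add per connection and one squashing function, versus a living cell with thousands of chemically mediated synapses.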

They’re two fundamentally different systems, and so is the resulting knowledge. AI doesn’t need to learn like a baby, because the model is the brain. The magic of our neurons is their plasticity and our ability to freely move around in this world and be creative. AI is just a model of what it’s been fed, so how do you get new ideas? But it seems that with LLMs, the more data and parameters, the more emergent abilities, so perhaps scaling it up gets us the rest of the way.

AI already does pretty amazing and bizarre things we don’t understand, and it already uses giant, expensive server farms to do it. AI is super compute-heavy and requires a ton of energy to run, so cost is rate-limiting the scale of AI.

There are also issues related to how to get more data. Generative AI output is already everywhere, and what good is it to train on its own output? Also, how do you ethically or legally get that data? Does that data violate our right to privacy?

Finally, I think AI actually possesses an intelligence, with an ability to reason like us. But it’s fundamentally a different form of intelligence.

[–] Phanatik@kbin.social 3 points 6 months ago (3 children)

I mainly disagree with the final statement, on the basis that LLMs are just more advanced predictive-text algorithms. The way they've been set up, with a chatbox where you interact directly with something that attempts human-like responses, creates the misconception that the thing you're talking to is more intelligent than it actually is. It gives a strong appearance of intelligence, but at the end of the day it predicts the next word in a sentence based on what was said previously, without doing a good job of comprehending what exactly it's telling you. It's very confident when it gives responses, which also means that when it's wrong, it delivers the incorrect response very confidently.

[–] epicsninja@lemmy.world 16 points 6 months ago

With current technology, a supercomputer capable of that would be absolutely gigantic, immobile, and have an insane power draw. How are you going to raise something the size of a building like a human?

Currently, a mouse brain is about the limit of what we can do. https://www.cell.com/neuron/fulltext/S0896-6273(20)30067-2

https://alleninstitute.org/news/scientists-recreated-part-of-the-mouse-brain-on-a-computer-and-showed-it-movies/

[–] JackGreenEarth@lemm.ee 12 points 6 months ago (1 children)

There's actually a Robert Miles video on this very question.

https://youtu.be/eaYIU6YXr3w

[–] Oaksey@lemmy.world 3 points 6 months ago* (last edited 6 months ago)

Was wondering if Robert Miles - Children had a music video with a lot of foresight.
https://youtu.be/DvyCbevQbtI

[–] swiftcasty@kbin.social 11 points 6 months ago* (last edited 6 months ago) (1 children)

Hardware limitations. A model that big would require millions of video cards, thousands of terabytes of storage, and hundreds of terabytes of RAM.

This is also where AI ethics plays into whether such a model should exist in the first place. People are really scared of AI but they don’t know that ethics standards are being enforced at the top level.

Edit: get Elon Musk on the phone, he’s deranged enough to spend that much money on something like this while ignoring the ethical and moral implications /s

[–] seaQueue@lemmy.world 9 points 6 months ago* (last edited 6 months ago)

Edit: get Elon Musk on the phone, he’s deranged enough to spend that much money on something like this while ignoring the ethical and moral implications /s

You joke but he'd probably traumatize a synthetic intelligence enough that it'd think 4chan user behavior is the baseline human standard

[–] letsgo@lemm.ee 9 points 6 months ago

A programmer's pet peeve is someone who says "why can't you just...".

But the fundamental problem with your plan, assuming it's possible at all (it's been said that if the brain were simple enough for us to understand, we'd be too simple to understand it), is that you're going to want your AI to be at least as smart as someone who's 30-40 years old, which by definition would take 30-40 years of raising it.

[–] rufus@discuss.tchncs.de 9 points 6 months ago* (last edited 6 months ago)

Simple answer: We don't have any computer to run that on. While I don't see any absolute limitations ruling out the approach, the human brain seems to have hundreds or thousands of trillions of connections, with analog electrical impulses and chemistry on top. That's still sci-fi; even the largest supercomputers can't do it as of today. I think scientists have already done it for smaller brains, like those of flies(?), so the concept should work.
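A back-of-envelope calculation shows the scale (both figures below are rough assumptions, not measurements):

```python
# Back-of-envelope only: both figures below are assumptions for illustration
synapses = 1e14          # a commonly cited order of magnitude for human synapse count
bytes_per_synapse = 8    # one double per synaptic weight, ignoring all dynamics

petabytes = synapses * bytes_per_synapse / 1e15
print(petabytes, "PB just to store one static number per synapse")
```

Merely storing a static weight per synapse is borderline feasible for a top supercomputer; the killer is updating the analog electrical and chemical state of every synapse at every timestep.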

And then there's the question of what you're going to do with it. You can't just kill a human, freeze the brain, slice it, and digitize it by looking through a microscope a trillion times. So you have to make it learn from the ground up, and that requires a connection to a body. So you also need to simulate a whole body, and the world it's in, on top, so that it learns something instead of just activating random neurons. That's going to stay sci-fi (like the Matrix) for the near and mid-term future.

[–] mechoman444@lemmy.world 8 points 6 months ago

You wouldn't need to raise it as a baby.

The reason humans come out as babies in the first place is that if we came out with fully developed brains, our heads wouldn't fit through the birth canal and the mother would probably die. So our brains have to mature as we get older, which of course takes decades.

Growing up is a biological imperative.

In terms of artificial intelligence or large language models, there would be no need to actually grow in physical size.

Which solidifies the point a person already made here: it would be a fundamentally different kind of intelligence, one that simply needs data input and will not need to grow up the way a child does.

[–] olafurp@lemmy.world 8 points 6 months ago (1 children)

AI is still a very slow learner. The base OS for humans is really advanced, with hormonal biases built in and an initial structure connected to inputs and outputs.

Sure, it's possible, but we're not there yet. It could still be 10-100 years until we manage to build a good one, depending on unknowns we can't predict yet.

[–] Corkyskog@sh.itjust.works 3 points 6 months ago

Didn't they just discover a new brain component recently?

I think this is what I was thinking of

[–] driving_crooner@lemmy.eco.br 7 points 6 months ago (1 children)

You can't raise it like a human because it is not a human. Are you going to make it the size of a baby? Pump it with hormones that change its structure when it becomes a teen?

[–] themeatbridge@lemmy.world 6 points 6 months ago

Learning models operate like neurons in that they make connections based on experiences (data). But that's like saying a microwave works like a chef in that it heats up food. We can't build a microwave that can run a kitchen, design a menu, take a bump in the walk-in, and fire off dishes the way a chef will.

[–] Neuromancer49@midwest.social 6 points 6 months ago* (last edited 6 months ago)

It's not a terrible idea by any means. It's pretty hard to do, though. Check out the Blue Brain Project. https://en.wikipedia.org/wiki/Blue_Brain_Project?wprov=sfla1

ETA: not to mention the brain is a heck of a lot more than a collection of neurons. Other commenters pointed out how we just discovered a new kind of brain cell; the brain is filled with many different types of neurons (e.g. pyramidal, Purkinje, dopaminergic, myelinated, unmyelinated, interneurons, etc.). Then there's an entire class of "neuron support" cells called neuroglia. This includes oligodendrocytes (and Schwann cells), microglia, satellite cells, and most importantly, astrocytes. These star-shaped cells can have a huge impact on how neurons communicate, by taking up neurotransmitters and through other mechanisms.

Here's more info: https://en.wikipedia.org/wiki/Tripartite_synapse?wprov=sfla1

[–] cygon@lemmy.world 6 points 6 months ago* (last edited 6 months ago)

Just some thoughts:

  • Current LLMs (chat AIs) are "frozen brains." (Over-)simplified: the AI's input neurons are given the 2048 prior words (the "context"), and each of the AI's output synapses corresponds to a different word, so the output that lights up most strongly is the next word the AI will say. Then the picked word is added to the "context" and the neural network is executed once more for the next word.

  • Coming up with the weights of the synapses takes insane effort (run millions of books through the "context" and check whether the AI predicts the next word correctly; if not, adjust the synapses). Afaik, GPT-4 was trained on more than 2000 NVidia A100 GPUs for somewhere around 4 to 7 months; I think they mentioned paying for 7.5 megawatt-hours.

  • If you had a supercomputer that could keep running the AI with live training, the AI's ability to string words together would likely, and quickly, degrade into incoherence, because it would just ingest and repeat whatever went into it. Existing biological brains have complex mechanisms for distilling experiences and evaluating them in terms of the usefulness/success of their own actions.

.

I think that foundation, that part that makes biological brains put the action/consequence in the foreground of the learning experience, rather than just ingesting, is what eludes us. Perhaps at some future point in time, we could take the initial brain structure that grows in a human as the seed for an AI (but I guess then we'd likely have to simulate all the highly complex traits of real neurons, including mixed chemical and electrical signaling and possibly even quantum-level effects that have been theorized).
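The "frozen brain" loop in the first bullet (context in, strongest output out, append, repeat) can be sketched like this, where `next_word_distribution` is a hypothetical stand-in for running the frozen network:

```python
def next_word_distribution(context):
    """Hypothetical stand-in for the frozen network: a real model would
    run billions of fixed weights here to score every possible next word."""
    canned = {"the": {"cat": 0.9, "dog": 0.1},
              "cat": {"sat": 0.8, "ran": 0.2},
              "sat": {"down": 1.0}}
    return canned.get(context[-1], {"<end>": 1.0})

def generate(context, max_words=10):
    context = list(context)
    for _ in range(max_words):
        scores = next_word_distribution(context)
        word = max(scores, key=scores.get)  # the output that "lights up most strongly"
        if word == "<end>":
            break
        context.append(word)  # grow the context and run the network again
    return context

print(" ".join(generate(["the"])))
```

Note that nothing in the loop ever changes the network itself, which is exactly why it's a "frozen brain": all learning happened before this code runs.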

[–] gregorum@lemm.ee 6 points 6 months ago* (last edited 6 months ago) (1 children)

Creating an accurate neuron simulation would probably require much more advanced AI than we already have. Like, real AI, not this piddly, piecemeal shit we have now.

You’re looking at this backwards. We’d need better AI to even start trying to simulate neurons accurately. They’re far more complex.

Currently, AI is capable of analyzing basic chemical and cellular interactions. It’s ok at it.

[–] Neuromancer49@midwest.social 4 points 6 months ago (2 children)

Actually, we've got some pretty sophisticated models of neurons. https://en.wikipedia.org/wiki/Blue_Brain_Project?wprov=sfla1

See my other comment for an example of how little we truly understand about neurons.

[–] gregorum@lemm.ee 5 points 6 months ago

Modeling neurons and simulating them with AI are very different things. And, as you say, we still know very little about neurons and the nervous system and the brain itself. How, then, could we even attempt to train an AI to work accurately?

[–] JohnDClay@sh.itjust.works 4 points 6 months ago

We don't know which mechanisms in a neuron are important, and we don't have anywhere near the computing power to model all of them. We have guesses as to what's important, and that's what a lot of modern AI is built on. But because computers have different strengths and weaknesses, we can't simulate a whole human brain yet.

[–] cooljacob204@kbin.social 4 points 6 months ago* (last edited 6 months ago)

One of the big reasons people are brushing over is latency. You can have a billion supercomputers simulating something, but the latency between them will prevent you from simulating an interconnected system like a bunch of neurons at a reasonable speed.
