this post was submitted on 31 Oct 2023

Machine Learning


Hey guys. Some time ago I developed an application/framework that lets you search (using a distributed GA) for "smart" networks - networks built only from NAND gates, with a memory block connected to the inputs and outputs. Learning is meant to happen continuously, since the memory is updated after every calculation of the outputs.
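
Roughly, the core evaluation loop looks like this - a simplified Rust sketch with made-up names (`NandNetwork`, `Gate`), not the actual code from the repo:

```rust
/// One NAND gate reads two earlier signals (by index).
struct Gate {
    a: usize,
    b: usize,
}

struct NandNetwork {
    n_inputs: usize,   // number of external inputs
    n_memory: usize,   // memory cells appended to the inputs
    n_outputs: usize,  // outputs read just before the memory block
    gates: Vec<Gate>,  // evaluated in order; gate i appends a new signal
    memory: Vec<bool>, // persists between calls to `step`
}

impl NandNetwork {
    fn step(&mut self, inputs: &[bool]) -> Vec<bool> {
        assert_eq!(inputs.len(), self.n_inputs);
        // Signal vector = external inputs ++ current memory ++ gate outputs.
        let mut signals: Vec<bool> = inputs.to_vec();
        signals.extend_from_slice(&self.memory);
        for g in &self.gates {
            signals.push(!(signals[g.a] && signals[g.b])); // NAND
        }
        // The last n_memory signals become the next memory state, so every
        // evaluation can influence the next one - that's the learning loop.
        let n = signals.len();
        self.memory = signals[n - self.n_memory..].to_vec();
        signals[n - self.n_memory - self.n_outputs..n - self.n_memory].to_vec()
    }
}
```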

The idea here is to use fundamental computing blocks (a universal gate circuit/network + memory) to construct a system that is able to learn. The search is done via a GA distributed over many nodes.
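
Schematically, the outer search is a plain generational loop. Here's a simplified single-node Rust sketch - the names and the truncation-selection scheme are illustrative, and in the real setup it's the fitness evaluations that get spread across nodes:

```rust
use rand::Rng; // assumes the `rand` crate

type Genome = Vec<(usize, usize)>; // one (a, b) wiring pair per NAND gate

const N_SIGNALS: usize = 64; // illustrative: max index a gate may read from

fn mutate(genome: &mut Genome, rate: f64, rng: &mut impl Rng) {
    for gene in genome.iter_mut() {
        if rng.gen_bool(rate) {
            *gene = (rng.gen_range(0..N_SIGNALS), rng.gen_range(0..N_SIGNALS));
        }
    }
}

fn evolve(mut pop: Vec<Genome>, fitness: impl Fn(&Genome) -> f64, generations: usize) -> Genome {
    let mut rng = rand::thread_rng();
    for _ in 0..generations {
        // Scoring is the expensive part - this is what gets farmed out to nodes.
        let mut scored: Vec<(f64, Genome)> =
            pop.drain(..).map(|g| (fitness(&g), g)).collect();
        scored.sort_by(|x, y| y.0.partial_cmp(&x.0).unwrap());
        // Keep the better half, refill with mutated copies of the survivors.
        let keep = scored.len() / 2;
        scored.truncate(keep);
        for i in 0..keep {
            let mut child = scored[i].1.clone();
            mutate(&mut child, 0.05, &mut rng);
            scored.push((0.0, child));
        }
        pop = scored.into_iter().map(|(_, g)| g).collect();
    }
    // Best survivor from the final generation.
    pop.into_iter().next().unwrap()
}
```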

Recently I obtained more compute and came back to playing with it, so I'm wondering if you have some pointers on how to configure it. More precisely:

  • What parameters of the GA search do you think would be optimal?
  • What kind of task (or set of tasks/levels), and what fitness function based on the results of those tasks, could actually drive the search toward something that is able to learn? (There's a sketch of what I mean right after this list.)
  • Does the idea even hold up to scrutiny? Or would the compute needed to arrive at any sensible result simply be too massive?
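
To make the second bullet more concrete, here's the kind of fitness I have in mind - a hypothetical sketch where the network runs the same task repeatedly with its memory carried across episodes, and fitness rewards improvement over the episodes rather than raw performance:

```rust
// Generic over the network type so it composes with whatever `step` API the
// network exposes; `run_episode` resets the task, but NOT the memory.
fn learning_fitness<N>(
    net: &mut N,
    episodes: usize,
    run_episode: impl Fn(&mut N) -> f64, // score for one pass over the task
) -> f64 {
    assert!(episodes >= 2);
    let scores: Vec<f64> = (0..episodes).map(|_| run_episode(net)).collect();
    let half = episodes / 2;
    let early = scores[..half].iter().sum::<f64>() / half as f64;
    let late = scores[half..].iter().sum::<f64>() / (episodes - half) as f64;
    // Positive only if performance improved as episodes went on, i.e. the
    // network used its memory to adapt - fixed behavior scores zero.
    late - early
}
```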

Blog post about the project + source code (GH):

Blog post with more details

Source code

top 6 comments
[–] avialex@alien.top 1 points 10 months ago

I can't point you to any specific paper/project, but this seems nearly identical to a lot of work on genetic-algorithm-based neural network learning, which was basically discarded as an active area of research after massively parallel gradient descent on GPUs became possible. That said, I do still find this interesting, even if it has been covered before. Memory as an integral part of the process might be somewhat novel, but I'm sure someone has tried it before. What is your philosophical goal in constructing this project?

[–] HomeworkParty69@alien.top 1 points 10 months ago

I'm not in a position to add anything, but I'd suggest crossposting this to r/FPGA for ideas from folks who think about HDL regularly.

[–] nikgeo25@alien.top 1 points 10 months ago (1 children)

No way, that's exactly what I've been doing as a side project.

[–] topcodemangler@alien.top 1 points 10 months ago

Hah, well that's an interesting coincidence then. May I ask what your general idea on the subject is? How do you structure them and "train" them? My assumption is that "training"/learning/etc. requires some form of feedback loop - the result of the current inference/calculation must (or may) impact the calculation of the next one. In my solution this is done by adding the memory part; the sketch below shows the general shape I mean.
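
In the abstract (an illustrative Rust trait, not the actual interface of either project):

```rust
// Each step consumes the previous memory and produces the next, so the
// result of computation t can shape computation t+1.
trait StatefulLearner {
    type Mem;
    fn step(&self, input: &[bool], mem: Self::Mem) -> (Vec<bool>, Self::Mem);
}

fn run<L: StatefulLearner>(l: &L, inputs: &[Vec<bool>], mut mem: L::Mem) -> Vec<Vec<bool>> {
    let mut outputs = Vec::new();
    for x in inputs {
        let (y, next) = l.step(x, mem);
        mem = next; // the only channel through which the past affects the future
        outputs.push(y);
    }
    outputs
}
```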

[–] murxman@alien.top 1 points 10 months ago (1 children)

There is a really good implementation of parallel genetic algorithms for Python if you need it:

https://github.com/Helmholtz-AI-Energy/propulate

[–] topcodemangler@alien.top 1 points 10 months ago

Thanks! Part of the idea was to also use Rust in order to learn it, but based on the description, the approach used there is the same as (or very similar to) what I've implemented - so maybe there are some interesting ways to improve and optimize what I have.