this post was submitted on 31 Oct 2023

Machine Learning

Hey guys. Some time ago I developed an application/framework that lets you search (using a distributed GA) for "smart" networks: a network built only from NAND gates, with a memory block connected to its inputs and outputs. Learning is meant to happen continuously, since the memory is updated after every calculation of the outputs.

The idea here is to use fundamental computing blocks (a universal gate circuit/network plus memory) to construct a system that is able to learn. The search is done via a GA distributed over many nodes.
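To make the core idea concrete, here is a minimal sketch of such a network: a feed-forward layer of NAND gates whose visible wires include a persistent memory block, with the memory overwritten from designated gate outputs after every evaluation. All type and field names here (`NandGate`, `output_taps`, `memory_taps`, etc.) are illustrative, not the project's actual API.

```rust
/// A gate reads two wires (by index) from the visible wire vector.
struct NandGate {
    a: usize,
    b: usize,
}

struct NandNetwork {
    gates: Vec<NandGate>,
    memory: Vec<bool>,       // persistent state carried between evaluations
    output_taps: Vec<usize>, // wire indices that form the network's output
    memory_taps: Vec<usize>, // wire indices that overwrite the memory block
}

impl NandNetwork {
    /// One evaluation step: compute all gates, emit outputs, update memory.
    fn step(&mut self, inputs: &[bool]) -> Vec<bool> {
        // Wires visible to the gates: external inputs, then memory, then
        // gate outputs computed so far (feed-forward ordering).
        let mut wires: Vec<bool> =
            inputs.iter().chain(self.memory.iter()).copied().collect();
        for gate in &self.gates {
            let out = !(wires[gate.a] && wires[gate.b]); // NAND
            wires.push(out);
        }
        let outputs: Vec<bool> =
            self.output_taps.iter().map(|&i| wires[i]).collect();
        // "Continuous" learning hook: memory is rewritten after every step.
        for (slot, &tap) in self.memory_taps.iter().enumerate() {
            self.memory[slot] = wires[tap];
        }
        outputs
    }
}
```

Because the memory feeds back into the next evaluation, even a single gate wired to its own memory slot produces state-dependent behavior across calls, which is the property the GA search is supposed to exploit.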

Recently I obtained more compute and came back to playing with it. I'm wondering if you have some pointers on how to configure it, more precisely:

  • What parameters of the GA search do you think would be optimal?
  • What kind of task (set of tasks/levels) and fitness function based on the results of those tasks could actually drive the search for something that is able to learn?
  • Does the idea even hold up to scrutiny? Or would the compute needed to arrive at any sensible result really have to be massive?
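For the first question, the knobs that usually matter most are population size, selection pressure (e.g. tournament size), mutation rate, and elitism. The sketch below is a generic single-node GA loop exposing those parameters; the genome is just a bit vector standing in for the encoded gate wiring, the fitness function is a placeholder (count of set bits), and the tiny xorshift PRNG is only there to keep the example dependency-free. None of this reflects the project's actual encoding or tasks.

```rust
// Tiny xorshift64 PRNG so the sketch needs no external crates (seed must be nonzero).
struct XorShift(u64);
impl XorShift {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
    fn below(&mut self, n: usize) -> usize { (self.next() % n as u64) as usize }
    fn chance(&mut self, p: f64) -> bool { (self.next() as f64 / u64::MAX as f64) < p }
}

// Placeholder fitness: in the real system this would run the candidate
// network on a battery of tasks and score its behavior.
fn fitness(genome: &[bool]) -> usize {
    genome.iter().filter(|&&b| b).count()
}

fn evolve(genome_len: usize, pop_size: usize, generations: usize,
          tournament: usize, mutation_rate: f64, elites: usize,
          rng: &mut XorShift) -> Vec<bool> {
    // Random initial population.
    let mut pop: Vec<Vec<bool>> = (0..pop_size)
        .map(|_| (0..genome_len).map(|_| rng.chance(0.5)).collect())
        .collect();
    for _ in 0..generations {
        // Sort best-first so elitism is a simple prefix copy.
        pop.sort_by_key(|g| std::cmp::Reverse(fitness(g)));
        let mut next: Vec<Vec<bool>> = pop[..elites].to_vec();
        while next.len() < pop_size {
            // Tournament selection: best of `tournament` random picks.
            let pick = |rng: &mut XorShift, pop: &[Vec<bool>]| -> Vec<bool> {
                (0..tournament)
                    .map(|_| rng.below(pop.len()))
                    .max_by_key(|&i| fitness(&pop[i]))
                    .map(|i| pop[i].clone())
                    .unwrap()
            };
            let p1 = pick(&mut *rng, &pop);
            let p2 = pick(&mut *rng, &pop);
            // One-point crossover followed by per-bit mutation.
            let cut = rng.below(genome_len);
            let mut child: Vec<bool> =
                p1[..cut].iter().chain(p2[cut..].iter()).copied().collect();
            for bit in child.iter_mut() {
                if rng.chance(mutation_rate) { *bit = !*bit; }
            }
            next.push(child);
        }
        pop = next;
    }
    pop.sort_by_key(|g| std::cmp::Reverse(fitness(g)));
    pop.remove(0)
}
```

For the second question, a common approach is to make fitness a weighted sum over a curriculum of tasks of increasing difficulty, giving partial credit so the search has a gradient to follow rather than an all-or-nothing score.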

Blog post about the project + source code (GH):

Blog post with more details

Source code

[–] topcodemangler@alien.top 1 points 1 year ago

Thanks. Part of the idea was also to use Rust in order to learn it, but based on the description, the approach used there is the same as (or very similar to) what I've implemented, so maybe there are some interesting ways to improve and optimize what I've got.