this post was submitted on 17 Nov 2023
Machine Learning
Hi u/Load-Consideration-2! I am currently writing a book on this and some of the chapters are already available: https://tsetlinmachine.org. There is also source code for many of the latest advances here: https://github.com/cair/tmu.
Logical learning with the Tsetlin machine is fully transparent. At the same time, it shares key capabilities with neural networks: it learns non-linear patterns, supports convolution, and learns online, one example at a time.
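To make the logical, transparent side concrete, here is a minimal sketch of how Tsetlin machine inference works in principle (this is my own simplified illustration, not the tmu library's API): each clause is a conjunction over the input bits and their negations, positive clauses vote for the class, negative clauses vote against it, and the decision is taken from the vote sum. The clause definitions below are hand-picked to encode XOR rather than learned:

```python
# Simplified Tsetlin machine inference sketch (illustration only, not the tmu API).
# A clause is a conjunction of "literals": the input bits and their negations.
# Literal k < n refers to x[k]; literal k >= n refers to NOT x[k - n].

def evaluate_clause(x, included_literals):
    """Return 1 only if every included literal evaluates to 1."""
    literals = list(x) + [1 - v for v in x]  # [x0, x1, ..., NOT x0, NOT x1, ...]
    return int(all(literals[k] for k in included_literals))

def classify(x, positive_clauses, negative_clauses):
    """Positive clauses vote +1, negative clauses vote -1; decide by the sum."""
    votes = sum(evaluate_clause(x, c) for c in positive_clauses)
    votes -= sum(evaluate_clause(x, c) for c in negative_clauses)
    return int(votes >= 0)

# Hand-written clauses encoding XOR over two bits:
# for class 1:      (x0 AND NOT x1), (NOT x0 AND x1)
# against class 1:  (x0 AND x1),     (NOT x0 AND NOT x1)
pos = [[0, 3], [2, 1]]
neg = [[0, 1], [2, 3]]

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, classify(x, pos, neg))  # prints 0, 1, 1, 0
```

Because every prediction is just a sum over which conjunctive clauses fired, you can read the learned rules directly, which is what makes the model transparent.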
The Tsetlin machine is only 5 years old, and our biggest challenge is actually not inductive bias but excessive expressive power, which causes overfitting, just as it did for neural networks before us. There is a lot of ongoing research and progress here, and I think we have only seen the beginning.
Here is a recent paper that illustrates the benefits of Tsetlin machines in natural language processing and image analysis: https://ojs.aaai.org/index.php/AAAI/article/view/26588. Here is a paper on medical image analysis: https://arxiv.org/abs/2301.10181.
Where the Tsetlin machine currently excels is energy-constrained edge machine learning, where you can get up to 10,000x lower energy consumption and 1,000x faster inference (https://www.mignon.ai).
My goal is to create an alternative to BigTech’s black boxes: free, green, transparent, and logical (http://cair.uia.no).
This is so cool. I'd never heard of your creation before this post. I'm a pure mathematician turned statistician just getting my feet wet with neural networks and other more modern approaches to regression and classification. This is very cool work you're doing.