[–] olegranmo@alien.top 1 points 1 year ago

Sounds like an exciting problem! I guess leveraging the Tsetlin machine clauses can give a fresh take on the task. Tsetlin machines also support reasoning by elimination. That is, they can learn what the target isn't instead of what it is, for increased robustness: https://www.ijcai.org/proceedings/2022/616
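To give a feel for the idea, here is a minimal, purely illustrative sketch of reasoning by elimination (the clause sets and labels are hypothetical, not taken from the paper): instead of learning clauses that recognize a class, the machine can learn clauses that recognize what a class is *not*, and rule that class out when they fire.

```python
# Illustrative sketch of reasoning by elimination.
# A clause is a conjunction (AND) of literals; each literal is a pair
# (feature_index, negated). If a "rule-out" clause fires on input x,
# the corresponding class is eliminated from the candidate set.

def classify_by_elimination(x, rule_out_clauses):
    """x: tuple of 0/1 features.
    rule_out_clauses: dict mapping class label -> list of clauses."""
    def fires(clause):
        return all((not x[i]) if negated else x[i] for i, negated in clause)

    candidates = set(rule_out_clauses)
    for label, clauses in rule_out_clauses.items():
        if any(fires(clause) for clause in clauses):
            candidates.discard(label)
    return candidates

# Toy task: parity of two bits. "Both inputs equal" rules out odd;
# "exactly one input set" rules out even.
rules = {
    "odd":  [[(0, False), (1, False)], [(0, True), (1, True)]],
    "even": [[(0, False), (1, True)], [(0, True), (1, False)]],
}

print(classify_by_elimination((1, 0), rules))  # -> {'odd'}
print(classify_by_elimination((1, 1), rules))  # -> {'even'}
```

The surviving candidate is the prediction; because every rule-out clause is a readable boolean expression, you can inspect exactly why the other classes were rejected.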

[–] olegranmo@alien.top 1 points 1 year ago (2 children)

Thanks, u/JustAnotherRedUser1 - the convolution chapter is coming next, in a few weeks. After that, the one on regression. I aim to complete the book within the next six months, so time series will be covered sometime before then.

[–] olegranmo@alien.top 1 points 1 year ago (6 children)

Hi u/Load-Consideration-2! I am currently writing a book on this and some of the chapters are already available: https://tsetlinmachine.org. There is also source code for many of the latest advances here: https://github.com/cair/tmu.

Logical learning with the Tsetlin machine is fully transparent. Still, it is similar to neural networks in that it learns non-linear patterns, supports convolution, and learns online, one example at a time.
The Tsetlin machine is only 5 years old, and our biggest challenge is actually not inductive bias but excessive expressive power, which leads to overfitting, just as it did for neural networks before us. There is lots of ongoing research and progress here, and I think we have only seen the beginning.
Here is a recent paper that illustrates the benefits of Tsetlin machines in natural language processing and image analysis: https://ojs.aaai.org/index.php/AAAI/article/view/26588. Here is a paper on medical image analysis: https://arxiv.org/abs/2301.10181.
The Tsetlin machine currently excels at energy-constrained edge machine learning, where you can get up to 10,000x lower energy consumption and 1,000x faster inference (https://www.mignon.ai).
My goal is to create an alternative to BigTech’s black boxes: free, green, transparent, and logical (http://cair.uia.no).