this post was submitted on 09 Nov 2023
Machine Learning

1 readers
1 users here now

Community Rules:

founded 11 months ago
MODERATORS
 

I'm a data engineer who somehow ended up as a software developer. Many of my friends are now working with the OpenAI API to add generative capabilities to their products, but they lack A LOT of context when it comes to how LLMs actually work.

This is why I started writing popular-science-style articles that unpack AI concepts for software developers working on real-world applications. It started kind of slow; honestly, I wrote a bit too "brainy" for them, but now I've found a voice that resonates with this audience much better, and I want to ramp up my writing cadence.

I would love to hear your thoughts on what concepts I should write about next.
What gets you excited that you also find hard to explain to someone with a different background?

[–] Jessynoo@alien.top 1 points 10 months ago (1 children)
  • Symbolic learning (KBIL, etc.) has kind of faded away, and the whole chapter was nuked from AIMA. Waiting for its comeback.
  • Game theory has made a lot of progress on both the algorithmic and the societal sides (regret minimisation, Bayesian and differential games, topology of elementary games, mechanism design, social choice theory, etc.). Hopefully it will get democratized at some point, because it is needed.
  • Probabilistic programming does not seem to be getting much traction lately, but the corresponding approaches extend ML and provide a bridge to symbolic approaches.
  • Arg-tech and, more generally, the semantic web still seem niche, whereas LLMs are the perfect tools to finally make them work. They could also do some good for our current societal issues.
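To make the regret-minimisation point concrete, here is a minimal Python sketch of regret matching in rock-paper-scissors. The function names and the self-play setup are my own illustration, not taken from any library; the point is just that playing actions in proportion to positive regret pushes the average strategy toward the 1/3-1/3-1/3 Nash equilibrium.

```python
import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b] = payoff of playing action a against action b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive cumulative regret."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total <= 0:
        return [1.0 / ACTIONS] * ACTIONS  # fall back to uniform
    return [p / total for p in positives]

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        my = rng.choices(range(ACTIONS), weights=strat)[0]
        opp = rng.choices(range(ACTIONS), weights=strat)[0]  # self-play
        # regret = what I could have earned with a, minus what I earned
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][opp] - PAYOFF[my][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy

avg = train()
```

After training, `avg` should be close to uniform; it is the *average* strategy over iterations that converges, not the last one, which is the classic gotcha with regret matching.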
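And a toy illustration of the probabilistic-programming idea using nothing but rejection sampling in plain Python, no library: write a generative model, condition on observed data, and read off an approximate posterior. The model and numbers are invented for the example.

```python
import random

def model(rng):
    """Generative story: draw a coin bias, then flip the coin 10 times."""
    bias = rng.random()  # prior: bias ~ Uniform(0, 1)
    flips = [rng.random() < bias for _ in range(10)]
    return bias, flips

def posterior_mean(observed_heads=8, samples=200_000, seed=0):
    """Approximate E[bias | heads == observed_heads] by rejection."""
    rng = random.Random(seed)
    kept = []
    for _ in range(samples):
        bias, flips = model(rng)
        if sum(flips) == observed_heads:  # condition on the observation
            kept.append(bias)
    return sum(kept) / len(kept)
```

With a uniform prior and 8 heads out of 10, the exact posterior is Beta(9, 3), whose mean is 0.75; the sampler should land close to that. Real probabilistic programming languages replace the brute-force rejection loop with far smarter inference, but the model/condition/query shape is the same.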
[–] pfaya@alien.top 1 points 10 months ago (1 children)
[–] Jessynoo@alien.top 1 points 10 months ago (1 children)

Argumentation technologies: a whole sub-branch extending FOL and modal logics. See Java's Tweety libraries or the Argument Interchange Format. Again, my early tests suggest LLMs are very good at building belief sets, running reasoners, and interpreting their results in layman's terms.
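For readers new to arg-tech, here is a rough Python sketch of a Dung-style abstract argumentation framework and its grounded extension, i.e. the set of arguments you can safely accept. The toy arguments are invented for illustration; real systems like Tweety support much richer semantics.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: iteratively accept arguments all of whose attackers
    have been defeated by already-accepted arguments."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:  # every attacker is defeated
                accepted.add(a)
                # whatever an accepted argument attacks is defeated
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

args = {"A", "B", "C"}
atts = {("A", "B"), ("B", "C")}  # A attacks B, B attacks C
grounded_extension(args, atts)   # accepts A and C, rejects B
```

Note that in a mutual-attack cycle (A attacks B, B attacks A) the grounded extension is empty: neither argument is defensible, which matches the cautious, skeptical reading of the semantics.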

[–] pfaya@alien.top 1 points 10 months ago

Ah, you might find this interesting: https://compphil.github.io/truth/