Hell, I’ve been training models and using ML directly for a decade and I barely know what’s going on in there.
Outside of low-dimensional toy models, I don’t think we’re capable of understanding what’s happening inside them. Even in academia, work on reliably interpreting trained networks is still in its infancy.
I remember studying “Probably Approximately Correct” (PAC) learning and such, and it was a pretty cool way of building axioms, theorems, and proofs to bound and reason about ML models. To my knowledge, there isn’t really anything like it for large networks; maybe someday.
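To give a flavor of what those guarantees look like: the classic realizable-case PAC bound for a finite hypothesis class says roughly (1/ε)(ln|H| + ln(1/δ)) samples suffice to get error ≤ ε with probability ≥ 1 − δ. A minimal sketch (the function name and the numbers are just illustrative, not from anything above):

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Classic finite-hypothesis-class PAC bound (realizable case):
    with m >= (1/epsilon) * (ln|H| + ln(1/delta)) i.i.d. samples, any
    hypothesis consistent with the data has true error <= epsilon,
    with probability at least 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# e.g. |H| = 2**20 hypotheses, 5% error tolerance, 99% confidence:
print(pac_sample_bound(2**20, epsilon=0.05, delta=0.01))  # -> 370 samples
```

The striking part is how small that number is, and how completely it stops applying once your “hypothesis class” is a billion-parameter network rather than a finite, enumerable set.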