George Hotz | Programming | tinygrad: neural engine on M1? | Science & Technology | Apple M1 | Part4



Date of stream: 21 Nov 2020. Live-stream chat added as Subtitles/CC – English (Twitch Chat). Stream title: tinygrad: neural engine on M1? Source files: …

26 Comments

  1. You can swap around what is adjustable in a neural network: make the dot products fixed (implemented by fast transforms) and the parametric activation functions adjustable. To stop the first transform from simply taking the spectrum of the input, you can apply a pre-decided random pattern of sign flips to the input data. The net is then: sign flips, transform, activation functions, transform, activation functions, …, transform. The fast Walsh-Hadamard transform is a good choice. The activation functions are f_i(x) = a_i·x for x < 0 and f_i(x) = b_i·x for x >= 0, for i = 0 to n-1. No bias term is needed. A minimal sketch of this construction follows below.
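     A minimal sketch of the scheme the comment describes, assuming NumPy; the class name FixedTransformNet, the depth, and the slope initialisation are illustrative choices, not taken from the comment:

        import numpy as np

        def fwht(x):
            # Fast Walsh-Hadamard transform; len(x) must be a power of 2.
            x = x.copy()
            h = 1
            while h < len(x):
                for i in range(0, len(x), h * 2):
                    for j in range(i, i + h):
                        x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
                h *= 2
            return x / np.sqrt(len(x))  # orthonormal scaling keeps magnitudes stable

        class FixedTransformNet:
            # Sign flips -> transform -> activations -> transform -> ... -> transform.
            # Only the per-element slopes a_i (x < 0) and b_i (x >= 0) are adjustable.
            def __init__(self, n, depth, rng):
                self.signs = rng.choice([-1.0, 1.0], size=n)    # pre-decided random sign flips
                self.a = rng.normal(1.0, 0.1, size=(depth, n))  # slopes for x < 0 (illustrative init)
                self.b = rng.normal(1.0, 0.1, size=(depth, n))  # slopes for x >= 0

            def forward(self, x):
                x = fwht(self.signs * x)                 # sign flips, then fixed transform
                for a, b in zip(self.a, self.b):
                    x = np.where(x < 0.0, a * x, b * x)  # f_i(x) = a_i*x or b_i*x; no bias
                    x = fwht(x)                          # another fixed transform
                return x

        rng = np.random.default_rng(0)
        net = FixedTransformNet(n=8, depth=2, rng=rng)
        print(net.forward(rng.normal(size=8)))

     The appeal of the construction is cost: each fixed layer is an O(n log n) transform rather than an O(n^2) dense matrix multiply, and training would only touch the 2n slope parameters per layer.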
