Speaker: Adam Celarek (Inst. 193-02 CG)

Deep learning is becoming a ubiquitous technology in computer science. As in programming languages, we use abstractions to hide the complexity of the machinery that does the heavy lifting. And just as software engineering benefits from understanding the inner workings of a processor, the development of deep learning benefits from understanding its building blocks: neurons, activation functions, cost functions, and backpropagation. This talk will only very briefly cover the first three and will focus on automatic differentiation (AD) as a method for backpropagation. AD is neither symbolic nor numerical differentiation, but a third, somewhat orthogonal method. I will demo a working, fully connected DNN using my own implementations of AD, gradient descent, transfer functions, and cost functions. The code is available at https://github.com/cg-tuwien/deep_learning_demo
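To give a flavour of the topic, here is a minimal sketch of reverse-mode automatic differentiation in Python. It is not the implementation from the linked repository; the `Var` class and its methods are illustrative assumptions. Each operation records its inputs together with a local derivative, and `backward()` propagates gradients through the recorded graph, which is exactly the mechanism backpropagation relies on.

```python
# Minimal sketch of reverse-mode automatic differentiation (illustrative only,
# not the code from the linked repository).
import math


class Var:
    """A scalar that records how it was computed, for reverse-mode AD."""

    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # list of (parent, local_derivative) pairs

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sigmoid(self):
        s = 1.0 / (1.0 + math.exp(-self.value))
        return Var(s, [(self, s * (1.0 - s))])

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(this node), then push the contribution
        # along each recorded edge using the chain rule.
        self.grad += seed
        for parent, local_grad in self._parents:
            parent.backward(seed * local_grad)


# Tiny example: a single neuron y = sigmoid(w*x + b)
w, x, b = Var(0.5), Var(2.0), Var(-1.0)
y = (w * x + b).sigmoid()
y.backward()
print(y.value, w.grad, b.grad)  # dy/dw and dy/db, exact to machine precision
```

Unlike numerical differentiation, the gradients are exact (no step-size error), and unlike symbolic differentiation, no closed-form expression is ever built; only the chain rule is applied along the recorded computation.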

Details

Duration: 30 + 10