Ambisonics is a scene-based representation of recorded or synthesized three-dimensional sound scenes that allows for efficient spatial transformations such as sound field rotations and directional loudness modifications; hence, it has been widely adopted for conveying immersive audio experiences via headphone-based reproduction. It is well known, however, that the inherently reduced spatial resolution of order-limited Ambisonic scenes leads to blurred source images, reduced spaciousness, and direction-dependent timbral artifacts when rendered for binaural playback. This is because head-related transfer functions (HRTFs) contain significant energy in modal orders as high as 30, whereas typical Ambisonic orders are much lower in practice; for recorded sound fields, often only first-order Ambisonic signals are available. Traditional binaural rendering methods adopt decoding strategies from loudspeaker-based Ambisonics reproduction. Depending on the number and position of the virtual loudspeakers, these methods typically exhibit either timbral artifacts due to dense spatial sampling, or strongly direction-dependent errors in binaural cues due to aliasing effects stemming from spatial undersampling. Addressing these issues, more recent rendering schemes employ HRTF preprocessing to reduce the required modal order, adopt alternative loss functions for the filter design, or enforce diffuse-field constraints. In this workshop we will discuss and compare classic and modern binaural rendering schemes with respect to signal colouration, localization, and externalization. As signal-independent methods fail to achieve near-transparent reproduction for Ambisonic orders as low as 1 or 2, we will also discuss signal-dependent rendering methods, in which estimated sound field parameters are used to increase the quality of the reproduced signals.
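The efficiency of spatial transformations mentioned above can be illustrated with a minimal sketch of a sound-field rotation: at first order, a yaw rotation of the whole scene is just a 2x2 mix of the two horizontal dipole channels. The sketch below assumes ACN channel ordering (W, Y, Z, X) with SN3D normalization; the function name and frame layout are illustrative, not taken from any particular library.

```python
import numpy as np

def rotate_yaw_foa(b_format: np.ndarray, alpha: float) -> np.ndarray:
    """Rotate a first-order Ambisonic signal (4 x n_samples) by alpha radians
    around the vertical (z) axis. Assumes ACN order [W, Y, Z, X]; W (omni)
    and Z (vertical dipole) are invariant under yaw."""
    w, y, z, x = b_format
    c, s = np.cos(alpha), np.sin(alpha)
    x_rot = c * x - s * y  # new front-back dipole
    y_rot = s * x + c * y  # new left-right dipole
    return np.stack([w, y_rot, z, x_rot])

# Encode a unit plane wave from the front (azimuth 0, elevation 0):
# W = 1, Y = 0, Z = 0, X = 1 in SN3D. Rotating the scene by +90 degrees
# should move the source to the left, i.e. X -> 0 and Y -> 1.
frame = np.array([[1.0], [0.0], [0.0], [1.0]])
rotated = rotate_yaw_foa(frame, np.pi / 2)
```

Higher orders work the same way, with block-diagonal rotation matrices acting per spherical-harmonic order, which is why rotation remains cheap compared with re-encoding the scene.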