With the emergence of throughput-intensive, ultra-low-latency applications, there is a need for a transport layer protocol that can achieve high throughput with low latency. One promising candidate is TCP BBR, a congestion control protocol developed by Google that aims to achieve high throughput and low latency by operating around the Bandwidth Delay Product (BDP) of the bottleneck link. Google reported significant throughput gains and much lower latency relative to TCP Cubic following the deployment of BBR in its high-speed wide area wired network. As many of these emerging applications will be supported by Millimeter Wave (mmWave) wireless networks, BBR should ideally achieve both high throughput and ultra-low latency in these settings as well. However, in our preliminary experiments with BBR over a mmWave wireless link operating at 60 GHz, we observed a severe degradation in throughput that we were able to attribute to high delay variation on the link. In this paper, we show that 'throughput collapse' occurs when BBR's estimate of the minimum RTT is less than half of the average RTT of the uncongested link (as when delay jitter is large). We demonstrate this phenomenon and explain the underlying reasons for it using a series of controlled experiments on the CloudLab testbed. We also present a mathematical analysis of BBR, which closely matches our experimental results. Based on our analysis, we propose and experimentally evaluate potential solutions that can overcome the throughput collapse without adding significant latency.