Deep neural networks are being used in disparate VLSI design automation tasks, including layout printability estimation, mask optimization, and routing congestion analysis. Preliminary results show the promise of deep learning as an alternative solution in state-of-the-art design and sign-off flows. However, deep learning is vulnerable to adversarial attacks. In this paper, we examine the risk of state-of-the-art deep learning-based layout hotspot detectors under practical attack scenarios. We show that legacy gradient-based attacks do not adequately account for design rule constraints. We present a novel adversarial attack formulation for layout clips and propose a fast group gradient method to solve it. Experiments show that the attack can deceive the deep neural networks using small perturbations in clips that preserve layout functionality while meeting the design rules. The source code is available at https://github.com/phdyang007/dlhsd/tree/dct_as_conv.
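As background on the legacy gradient-based attacks mentioned above (which perturb inputs along the loss gradient without regard for design rules), a minimal one-step gradient-sign (FGSM-style) sketch on a toy logistic "detector" is shown below. The model, weights, and numbers are purely illustrative and are not the paper's group gradient method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y, eps):
    """One-step gradient-sign attack on a logistic model p = sigmoid(w @ x).

    For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w,
    so stepping along its sign increases the loss on the true label y.
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Illustrative "hotspot" feature vector and fixed linear detector weights.
x = np.array([1.0, -0.5, 2.0])
w = np.array([0.8, -1.2, 0.5])

x_adv = fgsm_perturb(x, w, y=1.0, eps=0.1)
# The detector's confidence on the true (hotspot) label drops after the attack.
print(sigmoid(w @ x), ">", sigmoid(w @ x_adv))
```

Note that the perturbation here is unconstrained per-pixel noise; nothing in this step enforces that the perturbed clip remains a legal, functional layout, which is exactly the gap the design-rule-aware formulation in the paper addresses.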