In this paper, we demonstrate the vulnerability of learning-based perception algorithms used for autonomy in unmanned aerial systems (UAS). We present an attack framework that compromises the control of a UAS that uses a deep neural network (DNN) to track different types of targets over varying terrain. The DNN takes the UAS camera image as input and estimates the six-degree-of-freedom (6D) pose of the target relative to the UAS. The UAS flight controller uses the estimated relative 6D pose to compute the motor throttle commands that maintain a specified relative height while keeping the target centered in the camera frame. The attacker's objective is to reduce the height of the UAS relative to the target in a controlled fashion (e.g., to obtain unauthorized physical access to the UAS). This is achieved by dynamically modifying the texture of the tracked target in the environment. To this end, the attacker uses a digital poster to generate the adversarial tracking target remotely. The poster has an integrated camera to estimate the UAS pose relative to the poster, and it continuously adapts to this pose while remaining robust to lighting variations. The attack is demonstrated on the Virtual Robot Experimentation Platform (V-REP) in a custom-developed environment testbed.
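
As a minimal illustration of the pose-to-throttle loop described above (not the paper's actual controller), the sketch below shows a toy proportional scheme that maps an estimated relative 6D pose to per-motor throttle commands for an X-configuration quadrotor. All gains, the mixing scheme, and the convention that `z` is the UAS height above the target are hypothetical assumptions introduced here for clarity.

```python
import numpy as np

def tracking_control(rel_pose, target_height,
                     kp_z=0.5, kp_xy=0.3, hover_throttle=0.6):
    """Toy proportional controller (illustrative only): maps the estimated
    relative 6D pose (x, y, z, roll, pitch, yaw) of the target to normalized
    throttle commands for a four-motor X-quad. Gains and motor mixing are
    assumptions, not the paper's design."""
    x, y, z, roll, pitch, yaw = rel_pose
    # Altitude error: hold the specified height above the target
    # (here z is assumed to be the current height above the target).
    z_err = target_height - z
    base = hover_throttle + kp_z * z_err
    # Lateral errors: tilt toward the target to keep it centered in frame.
    pitch_cmd = kp_xy * x   # forward/backward correction
    roll_cmd = kp_xy * y    # left/right correction
    # Simple X-quad mixer: per-motor throttle from base, pitch, and roll terms.
    motors = np.array([
        base + pitch_cmd + roll_cmd,   # front-left
        base + pitch_cmd - roll_cmd,   # front-right
        base - pitch_cmd + roll_cmd,   # rear-left
        base - pitch_cmd - roll_cmd,   # rear-right
    ])
    return np.clip(motors, 0.0, 1.0)
```

Under this sketch, an adversarial texture that biases the DNN's height estimate upward would drive `z_err` negative and steadily lower the throttle, reducing the true height of the UAS, which is the controlled-descent effect the attack exploits.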