Optimizing behaviors for dexterous manipulation has been a longstanding challenge in robotics, with a variety of methods, from model-based control to model-free reinforcement learning, previously explored in the literature. Such prior work often requires extensive trial-and-error training along with task-specific tuning of reward functions, which makes applying dexterous manipulation to general-purpose problems impractical. A sample-efficient and practical alternative to trial-and-error learning is imitation learning. However, collecting and learning from demonstrations for dexterous manipulation is challenging due to the high-dimensional action space involved in multi-finger control. In this work, we propose 'Dexterous Imitation Made Easy' (DIME), a new imitation learning framework for dexterous manipulation. DIME requires only a single RGB camera that observes a human operator, whose hand motions are used to teleoperate a robotic hand. Once demonstrations are collected, DIME employs state-of-the-art imitation learning methods to train dexterous manipulation policies. On real-robot benchmarks, we demonstrate that DIME can solve complex in-hand manipulation tasks such as 'flipping', 'spinning', and 'rotating' objects with just 30 demonstrations and no additional robot training. Our code, pre-collected demonstrations, and robot videos are publicly available at: https://nyu-robot-learning.github.io/dime.