Traffic Engineering (TE) has long been used by Internet service providers to improve network performance and deliver better service quality to users. While flow-based TE is an alternative, destination-based TE is more readily deployed because destination-based forwarding is ubiquitously supported by today's routers. A challenge faced by state-of-the-art destination-based TE solutions is the considerable time a centralized controller takes to update the traffic split ratios for each entry of each router's forwarding table. This could impose a fundamental limit on how responsively the network reacts to dynamic changes in traffic demands. In this paper, we propose SmartEntry, a destination-based routing solution coupled with Reinforcement Learning (RL) that reduces the number of forwarding entries that must be updated in response to dynamic changes in traffic demands. SmartEntry forwards the majority of traffic over Equal-Cost Multi-Path (ECMP) routes and redistributes a small portion of traffic using our proposed RL algorithm. SmartEntry adopts Linear Programming (LP) to produce reward signals, and this combined RL + LP approach turns out to be surprisingly effective. We evaluate SmartEntry through extensive experiments on different network topologies with both real and synthesized traffic. The simulation results show that SmartEntry achieves near-optimal performance while saving 90% of forwarding entry updates, and generalizes well to unseen traffic matrices.
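The RL + LP coupling described above can be sketched in miniature: an LP computes the optimal maximum link utilization (MLU) for a given demand, and a reward compares an achieved split (here, plain ECMP over parallel links) against that optimum. The toy topology, the `scipy`-based LP, and the ratio-style reward below are illustrative assumptions for exposition only, not SmartEntry's actual formulation.

```python
# Illustrative sketch (assumed, not the paper's formulation): an LP gives the
# optimal max link utilization (MLU) for one demand split over parallel links,
# and the reward is the ratio of optimal MLU to the MLU a policy achieves.
from scipy.optimize import linprog

def optimal_mlu(capacities, demand):
    """Minimum achievable MLU: variables are per-link flows x_i plus t (the MLU).
    Minimize t subject to sum(x_i) = demand and x_i <= capacity_i * t."""
    n = len(capacities)
    c = [0.0] * n + [1.0]                 # objective: minimize t
    A_ub = [[0.0] * (n + 1) for _ in range(n)]
    for i, cap in enumerate(capacities):
        A_ub[i][i] = 1.0                  # x_i - cap_i * t <= 0
        A_ub[i][n] = -cap
    b_ub = [0.0] * n
    A_eq = [[1.0] * n + [0.0]]            # flows must sum to the demand
    b_eq = [demand]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    return res.x[n]

def ecmp_mlu(capacities, demand):
    """MLU when the demand is split equally (ECMP) across the links."""
    share = demand / len(capacities)
    return max(share / cap for cap in capacities)

def mlu_reward(capacities, demand):
    """Reward signal: optimal MLU / achieved MLU (1.0 means optimal)."""
    return optimal_mlu(capacities, demand) / ecmp_mlu(capacities, demand)
```

For two links of capacities 10 and 5 carrying a demand of 9, ECMP puts 4.5 on each link (MLU 0.9), while the LP optimum splits 6/3 (MLU 0.6), so the reward is 0.6/0.9 ≈ 0.67; an RL agent that shifts traffic toward the larger link would push this ratio toward 1.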