## Abstract

We consider the learning of algorithmic tasks from mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumptions whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study its implications for learning. This principle creates a powerful inductive bias that we leverage with neural architectures defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and Euclidean TSP. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity.
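To make the recursive scheme concrete, below is a minimal PyTorch sketch of the split/recurse/merge control flow the abstract describes. The class name `DivideAndConquerNet`, the median-threshold split, the concatenation merge, and the base-case size are all illustrative assumptions for exposition, not the paper's actual architecture or training procedure.

```python
import torch
import torch.nn as nn

class DivideAndConquerNet(nn.Module):
    """Sketch of a recursive divide-and-conquer wrapper (assumed design,
    not the authors' published model). `split_net` scores elements for a
    binary partition; `merge_fn` combines two partial solutions."""

    def __init__(self, split_net, merge_fn, base_size=2):
        super().__init__()
        self.split_net = split_net
        self.merge_fn = merge_fn
        self.base_size = base_size

    def forward(self, x):
        # Base case: small inputs are treated as already solved.
        if x.size(0) <= self.base_size:
            return x
        # Split: one logit per element, thresholded at the median; the
        # same weights are reused at every scale (scale invariance).
        scores = self.split_net(x).squeeze(-1)
        mask = scores > scores.median()
        left, right = x[mask], x[~mask]
        # Fall back to an even split if the partition degenerates.
        if left.size(0) == 0 or right.size(0) == 0:
            half = x.size(0) // 2
            left, right = x[:half], x[half:]
        # Conquer on each part with shared parameters, then merge the
        # two partial solutions into a larger one.
        return self.merge_fn(self.forward(left), self.forward(right))

# Toy usage: 2-D points, a linear split scorer, concatenation as "merge".
split = nn.Linear(2, 1)
merge = lambda a, b: torch.cat([a, b], dim=0)
net = DivideAndConquerNet(split, merge)
out = net(torch.randn(16, 2))
```

Because the recursion tree is built dynamically from the learned split, its size is a natural measure of the computation performed, which is what allows computational complexity to enter the objective as a regularization term as the abstract notes.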

| Original language | English (US) |
| --- | --- |
| State | Published - Jan 1 2018 |
| Event | 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada. Duration: Apr 30 2018 → May 3 2018 |

### Conference

| Conference | 6th International Conference on Learning Representations, ICLR 2018 |
| --- | --- |
| Country | Canada |
| City | Vancouver |
| Period | 4/30/18 → 5/3/18 |

## ASJC Scopus subject areas

- Language and Linguistics
- Education
- Computer Science Applications
- Linguistics and Language