Keyphrases
Warehouse-Scale Computer (100%)
Sirius (75%)
Intelligent Assistant (53%)
Sparse Matrix Multiplication (50%)
Memory Reconfiguration (50%)
Personal Assistant (50%)
Implications for Future (50%)
Collaborative Intelligence (50%)
Apple Siri (32%)
Open-end (28%)
Off-chip Bandwidth (25%)
Fast Architecture (25%)
Reconfigurable Memory (25%)
Mobile Devices (25%)
Edge Cloud (25%)
Pass Transistor Logic (25%)
Design Futures (25%)
Neurosurgeon (25%)
Synthesizable PLL (25%)
Mobile Edge (25%)
Celerity (25%)
Computation Partitioning (25%)
Open-source EDA (25%)
Server Architecture (21%)
Total Cost of Ownership (21%)
Google (20%)
CPU-GPU (19%)
Microsoft Cortana (19%)
Latency (18%)
Deep Neural Network (18%)
Server Design (14%)
Natural Language (13%)
Artificial Intelligence (12%)
Energy Efficiency Improvement (12%)
Memory Hierarchy (12%)
Scratchpad (12%)
Cloud Services (12%)
Cloud Architecture (12%)
Router Node (12%)
Sparse Matrices (12%)
CoreMark (12%)
Wireless Networks (12%)
Cortex-M4 (12%)
Caching (12%)
Graph-based (12%)
Intelligent Applications (12%)
On-chip Memory (12%)
User Query (12%)
Mobile Energy (12%)
Fine-grained Layer (12%)
Computer Science
Warehouse-Scale Computer (100%)
Personal Assistant (80%)
Matrix Multiplication (50%)
Networks on Chips (50%)
Phase Locked Loop (50%)
Reconfiguration (50%)
Energy Efficiency (37%)
Open Source (35%)
Total Cost of Ownership (30%)
Server Architecture (30%)
Computation Partitioning (30%)
Open Source Tool (25%)
User Perspective (25%)
Critical Path (25%)
Memory Hierarchy (25%)
Memory Access (25%)
Bandwidth Efficiency (25%)
Deep Neural Network (25%)
Open Source Community (25%)
Artificial Intelligence (25%)
Chip Architecture (20%)
Mobile Device (20%)
Point Multiplication (12%)
Coprocessor (10%)
Wireless Networks (10%)
Service Application (10%)
Web Service (10%)
Global Address Space (8%)
Time Development (8%)
Type Transistor (6%)
Hardware Platform (5%)
Artificial Neural Network (5%)
Machine Learning Technique (5%)
Computer Vision (5%)
Computational Resource (5%)
Development Platform (5%)
Layer Neural Network (5%)
Energy Consumption (5%)
Lower Energy Consumption (5%)
Granularity (5%)
Neural Network Architecture (5%)
Energy Efficient (5%)
Engineering
Matrix Multiplication (53%)
Reconfiguration (50%)
Processing Unit (25%)
Memory Access (25%)
Systolic Arrays (25%)
Memory Hierarchy (25%)
Graphics Processing Unit (25%)
Chip Memory (25%)
Energy Conservation (25%)
Energy Efficiency (25%)
Bandwidth Efficiency (25%)
Matrix Operation (7%)
Load Capacitance (5%)