The background for this question is the following:
This guide (https://missinglink.ai/guides/computer-vision/complete-guide-deep-learning-gpus/) states:
"
- Memory bandwidth is the most important characteristic of a GPU. Opt for a GPU with the highest bandwidth available within your budget.
- The number of cores determines the speed at which the GPU can process data, the higher the number of cores, the faster the GPU can compute data. Consider this especially when dealing with large amounts of data.
- Video RAM size (VRAM) is a measurement of how much data the GPU can handle at once. The amount of VRAM required depends on your tasks so plan accordingly.
- Processing power is a factor of the number of cores inside the GPU multiplied by the clock speed at which they run. The processing power indicates the speed at which your GPU can compute data and determines how fast your system will perform tasks.
- If you are running light tasks like small or simple deep learning models, you can use a low-end GPU like Nvidia’s GTX 1030.
- If you are handling complex tasks such as neural networks training you should equip your system with a high-end GPU like Nvidia’s RTX 2080 TI or even its most powerful Titan lineup. Alternatively, you can use a cloud service like Google’s GCP or Amazon’s AWS which provides strong GPU capabilities.
- If you are working on highly demanding tasks such as multiple simultaneous experiments or require on-premise GPU parallelism, then no matter how high end your GPU is, one GPU won’t be enough. In this case, you should purchase a system designed for multi-GPU computing."
Option A) would be a single 2080 Ti (€1,200)
Option B) 2x 2070 Super (2 × €500)
| NVIDIA Card | CUDA Cores | Memory Interface Width | Memory Bandwidth | Base Clock | Boost Clock | VRAM |
|---|---|---|---|---|---|---|
| RTX 2060 | 1920 | 192 bit | 336 GB/s | 1365 MHz | 1680 MHz | 6 GB |
| RTX 2060 Super | 2176 | 256 bit | 448 GB/s | 1470 MHz | 1650 MHz | 8 GB |
| RTX 2070 | 2304 | 256 bit | 448 GB/s | 1410 MHz | 1620 MHz | 8 GB |
| RTX 2070 Super | 2560 | 256 bit | 448 GB/s | 1605 MHz | 1770 MHz | 8 GB |
| RTX 2080 | 2944 | 256 bit | 448 GB/s | 1515 MHz | 1710 MHz | 8 GB |
| RTX 2080 Super | 3072 | 256 bit | 496 GB/s | 1650 MHz | 1815 MHz | 8 GB |
| RTX 2080 Ti | 4352 | 352 bit | 616 GB/s | 1350 MHz | 1545 MHz | 11 GB |
Source: https://www.studio1productions.com/Articles/NVidia-GPU-Chart.htm
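Using the guide's rough "processing power ≈ cores × clock" heuristic, the two options can be compared directly from the table numbers. This is only a back-of-the-envelope sketch, not a benchmark; real training throughput also depends on memory bandwidth, Tensor Cores, interconnect, and how well the workload scales across cards:

```python
# Back-of-the-envelope comparison of the two options, using the table above.
# "Raw power" here is just the crude heuristic cores * boost clock from the
# guide; it ignores bandwidth, Tensor Cores, and multi-GPU scaling losses.

cards = {
    "RTX 2080 Ti":    {"cores": 4352, "boost_mhz": 1545, "bw_gbs": 616, "vram_gb": 11},
    "RTX 2070 Super": {"cores": 2560, "boost_mhz": 1770, "bw_gbs": 448, "vram_gb": 8},
}

def raw_power(card, count=1):
    """Aggregate cores * boost clock (in core-MHz) across `count` GPUs."""
    return count * card["cores"] * card["boost_mhz"]

option_a = raw_power(cards["RTX 2080 Ti"])        # 1x 2080 Ti
option_b = raw_power(cards["RTX 2070 Super"], 2)  # 2x 2070 Super

print("Option A:", option_a, "core-MHz")  # 6723840
print("Option B:", option_b, "core-MHz")  # 9062400

# Caveats the raw number hides:
# - VRAM does not pool under data parallelism: each replica must hold the
#   full model, so option B is limited to 8 GB per GPU vs. 11 GB on the Ti.
# - The aggregate bandwidth of option B (2 x 448 GB/s) only helps if the
#   workload actually splits cleanly across both cards.
```

By this crude metric the two 2070 Supers come out ahead in aggregate compute, while the single 2080 Ti wins on per-GPU VRAM and bandwidth.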
My questions are:
1) To what extent do GPUs "stack" with respect to cores, bandwidth, and VRAM (in Keras)?
2) In your experience, which option makes more sense for deep learning?
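Regarding 1), my understanding so far: in Keras (with the TensorFlow backend) multiple GPUs typically "stack" via data parallelism, e.g. `tf.distribute.MirroredStrategy` — compute and effective batch size scale across replicas, but VRAM does not pool, since each GPU holds a full copy of the model. A minimal sketch of that pattern (the toy model and random data are placeholders of mine):

```python
import tensorflow as tf

# Data parallelism in Keras: each visible GPU gets a full model replica,
# batches are split across replicas, and gradients are all-reduced.
# VRAM is NOT pooled -- the model must fit on every single GPU.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Toy model just to illustrate the pattern; variables created inside
    # the scope are mirrored onto every replica.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# The global batch size is divided among the replicas, so with two GPUs a
# batch of 256 means 128 examples per card.
x = tf.random.normal((256, 32))
y = tf.random.normal((256, 1))
model.fit(x, y, batch_size=256, epochs=1, verbose=0)
```

So cores and bandwidth effectively aggregate (minus synchronization overhead), while the usable VRAM per model stays at the per-card amount — which is exactly why the 8 GB vs. 11 GB difference matters for large models.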