Manufacturer | Nvidia |
---|---|
Introduced | May 2, 2007 |
Discontinued | May 2020 |
Type | General purpose graphics cards |
Nvidia Tesla was the name of Nvidia's line of products targeted at stream processing and general-purpose computing on graphics processing units (GPGPU), named after the pioneering electrical engineer Nikola Tesla. The line launched with products based on the G80 GPU and was refreshed alongside each new generation of chips. Tesla cards are programmable using the CUDA or OpenCL APIs.
The Nvidia Tesla product line competed with AMD's Radeon Instinct and Intel Xeon Phi lines of deep learning and GPU cards.
Nvidia retired the Tesla brand in May 2020, reportedly because of potential confusion with the brand of cars.[1] Its new GPUs are branded Nvidia Data Center GPUs,[2] as in the Ampere A100 GPU.[3]
Overview
Offering computational power much greater than traditional microprocessors, the Tesla products targeted the high-performance computing market.[4] Tesla GPUs have powered some of the world's fastest supercomputers, including Summit at Oak Ridge National Laboratory and Tianhe-1A in Tianjin, China.
Tesla cards offered four times the double-precision performance of a Fermi-based Nvidia GeForce card of similar single-precision performance. Unlike Nvidia's consumer GeForce cards and professional Quadro cards, Tesla cards were originally unable to output images to a display, although the last Tesla C-class products included one Dual-Link DVI port.[5]
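The four-to-one claim follows from the precision ratios of the Fermi generation: Tesla cards ran double precision at half their single-precision rate, while consumer GeForce Fermi cards were capped at one eighth. A quick check, using the Tesla C2050 figures from the specifications table below (the 1/8 GeForce cap is background knowledge, not stated in this article):

```python
# Fermi-generation FP64:FP32 throughput ratios. The Tesla C2050 GFLOPS
# figures are taken from the specifications table below.
tesla_fp32 = 1030.4   # C2050 single precision, GFLOPS
tesla_fp64 = 515.2    # C2050 double precision, GFLOPS

tesla_ratio = tesla_fp64 / tesla_fp32   # 1/2 on Tesla
geforce_ratio = 1 / 8                   # consumer Fermi FP64 cap

# At similar FP32 throughput, the Tesla card's FP64 advantage:
advantage = tesla_ratio / geforce_ratio
print(tesla_ratio, advantage)           # 0.5 4.0
```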
As part of Project Denver, Nvidia intended to embed 64-bit ARMv8 processor cores in its GPUs, as a follow-up to the 32-bit Tegra chips.[6]
The Tesla P100 uses TSMC's 16-nanometer FinFET semiconductor manufacturing process, which is more advanced than the 28-nanometer process used in AMD and Nvidia GPUs between 2012 and 2016. The P100 also uses Samsung's HBM2 memory.[7]
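HBM2's advantage comes largely from its very wide bus: peak memory bandwidth is the bus width (in bytes) multiplied by the per-pin data rate. A minimal sketch, using the P100 values from the specifications table below:

```python
# Peak memory bandwidth = (bus width / 8 bits per byte) * data rate.
# P100 figures from the specifications table: 4,096-bit HBM2 at 1,430 MT/s.
bus_width_bits = 4096
data_rate_mts = 1430   # mega-transfers per second

bandwidth_gb_s = bus_width_bits / 8 * data_rate_mts / 1000
print(bandwidth_gb_s)  # 732.16, i.e. the table's 732 GB/s
```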
Applications
Tesla products were used primarily in simulations and large-scale calculations (especially floating-point calculations), and for high-end image generation in professional and scientific fields.[8]
In 2013, the defense industry accounted for less than one-sixth of Tesla sales, but Nvidia's Sumit Gupta predicted increasing sales to the geospatial intelligence market.[9]
Specifications
Model | Micro-architecture | Launch | Chips | Core clock (MHz) | CUDA cores (total) | Base clock (MHz) | Max boost clock (MHz)[lower-alpha 3] | Memory bus type | Memory bus width (bit) | Memory size (GB) | Memory clock (MT/s) | Memory bandwidth (GB/s) | Half precision Tensor Core FP32 accumulate (GFLOPS)[lower-alpha 1] | Single precision MAD or FMA (GFLOPS)[lower-alpha 1] | Double precision FMA (GFLOPS)[lower-alpha 1] | CUDA compute capability[lower-alpha 2] | TDP (watts) | Notes, form factor
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
C870 GPU Computing Module[lower-alpha 4] | Tesla | May 2, 2007 | 1× G80 | 600 | 128 | 1,350 | — | GDDR3 | 384 | 1.5 | 1,600 | 76.8 | No | 345.6 | No | 1.0 | 170.9 | Internal PCIe GPU (full-height, dual-slot) |
D870 Deskside Computer[lower-alpha 4] | May 2, 2007 | 2× G80 | 600 | 256 | 1,350 | — | GDDR3 | 2× 384 | 2× 1.5 | 1,600 | 2× 76.8 | No | 691.2 | No | 1.0 | 520 | Deskside or 3U rack-mount external GPUs | |
S870 GPU Computing Server[lower-alpha 4] | May 2, 2007 | 4× G80 | 600 | 512 | 1,350 | — | GDDR3 | 4× 384 | 4× 1.5 | 1,600 | 4× 76.8 | No | 1382.4 | No | 1.0 | 1U rack-mount external GPUs, connect via 2× PCIe (×16) | ||
C1060 GPU Computing Module[lower-alpha 5] | April 9, 2009 | 1× GT200 | 602 | 240 | 1,296[11] | — | GDDR3 | 512 | 4 | 1,600 | 102.4 | No | 622.08 | 77.76 | 1.3 | 187.8 | Internal PCIe GPU (full-height, dual-slot) | |
S1070 GPU Computing Server "400 configuration"[lower-alpha 5] | June 1, 2008 | 4× GT200 | 602 | 960 | 1,296 | — | GDDR3 | 4× 512 | 4× 4 | 1,538.4 | 4× 98.5 | No | 2,488.3 | 311.0 | 1.3 | 800 | 1U rack-mount external GPUs, connect via 2× PCIe (×8 or ×16) |
S1070 GPU Computing Server "500 configuration"[lower-alpha 5] | 1,440 | — | No | 2,764.8 | 345.6 | |||||||||||||
S1075 GPU Computing Server[lower-alpha 5][12] | June 1, 2008 | 4× GT200 | 602 | 960 | 1,440 | — | GDDR3 | 4× 512 | 4× 4 | 1,538.4 | 4× 98.5 | No | 2,764.8 | 345.6 | 1.3 | 1U rack-mount external GPUs, connect via 1× PCIe (×8 or ×16) | ||
Quadro Plex 2200 D2 Visual Computing System[lower-alpha 6] | July 25, 2008 | 2× GT200GL | 648 | 480 | 1,296 | — | GDDR3 | 2× 512 | 2× 4 | 1,600 | 2× 102.4 | No | 1,244.2 | 155.5 | 1.3 | Deskside or 3U rack-mount external GPUs with 4 dual-link DVI outputs | ||
Quadro Plex 2200 S4 Visual Computing System[lower-alpha 6] | July 25, 2008 | 4× GT200GL | 648 | 960 | 1,296 | — | GDDR3 | 4× 512 | 4× 4 | 1,600 | 4× 102.4 | No | 2,488.3 | 311.0 | 1.3 | 1,200 | 1U rack-mount external GPUs, connect via 2× PCIe (×8 or ×16) | |
C2050 GPU Computing Module[13] | Fermi | July 25, 2011 | 1× GF100 | 575 | 448 | 1,150 | — | GDDR5 | 384 | 3[lower-alpha 7] | 3,000 | 144 | No | 1,030.4 | 515.2 | 2.0 | 247 | Internal PCIe GPU (full-height, dual-slot)
M2050 GPU Computing Module[14] | July 25, 2011 | — | 3,092 | 148.4 | No | 225 | ||||||||||||
C2070 GPU Computing Module[13] | July 25, 2011 | 1× GF100 | 575 | 448 | 1,150 | — | GDDR5 | 384 | 6[lower-alpha 7] | 3,000 | 144 | No | 1,030.4 | 515.2 | 2.0 | 247 | Internal PCIe GPU (full-height, dual-slot) | |
C2075 GPU Computing Module[15] | July 25, 2011 | — | 3,000 | 144 | No | 225 | ||||||||||||
M2070/M2070Q GPU Computing Module[16] | July 25, 2011 | — | 3,132 | 150.336 | No | 225 | ||||||||||||
M2090 GPU Computing Module[17] | July 25, 2011 | 1× GF110 | 650 | 512 | 1,300 | — | GDDR5 | 384 | 6[lower-alpha 7] | 3,700 | 177.6 | No | 1,331.2 | 665.6 | 2.0 | 225 | Internal PCIe GPU (full-height, dual-slot) |
S2050 GPU Computing Server | July 25, 2011 | 4× GF100 | 575 | 1,792 | 1,150 | — | GDDR5 | 4× 384 | 4× 3[lower-alpha 7] | 3,092 | 4× 148.4 | No | 4,121.6 | 2,060.8 | 2.0 | 900 | 1U rack-mount external GPUs, connect via 2× PCIe (×8 or ×16) |
S2070 GPU Computing Server | — | 4× 6[lower-alpha 7] | No | |||||||||||||||
K10 GPU accelerator[18] | Kepler | May 1, 2012 | 2× GK104 | — | 3,072 | 745 | ? | GDDR5 | 2× 256 | 2× 4 | 5,000 | 2× 160 | No | 4,577 | 190.7 | 3.0 | 225 | Internal PCIe GPU (full-height, dual-slot) |
K20 GPU accelerator[19][20] | November 12, 2012 | 1× GK110 | — | 2,496 | 706 | 758 | GDDR5 | 320 | 5 | 5,200 | 208 | No | 3,524 | 1,175 | 3.5 | 225 | Internal PCIe GPU (full-height, dual-slot) | |
K20X GPU accelerator[21] | November 12, 2012 | 1× GK110 | — | 2,688 | 732 | ? | GDDR5 | 384 | 6 | 5,200 | 250 | No | 3,935 | 1,312 | 3.5 | 235 | Internal PCIe GPU (full-height, dual-slot) | |
K40 GPU accelerator[22] | October 8, 2013 | 1× GK110B | — | 2,880 | 745 | 875 | GDDR5 | 384 | 12[lower-alpha 7] | 6,000 | 288 | No | 4,291–5,040 | 1,430–1,680 | 3.5 | 235 | Internal PCIe GPU (full-height, dual-slot) | |
K80 GPU accelerator[23] | November 17, 2014 | 2× GK210 | — | 4,992 | 560 | 875 | GDDR5 | 2× 384 | 2× 12 | 5,000 | 2× 240 | No | 5,591–8,736 | 1,864–2,912 | 3.7 | 300 | Internal PCIe GPU (full-height, dual-slot) | |
M4 GPU accelerator[24][25] | Maxwell | November 10, 2015 | 1× GM206 | — | 1,024 | 872 | 1,072 | GDDR5 | 128 | 4 | 5,500 | 88 | No | 1,786–2,195 | 55.81–68.61 | 5.2 | 50–75 | Internal PCIe GPU (half-height, single-slot) |
M6 GPU accelerator[26] | August 30, 2015 | 1× GM204-995-A1 | — | 1,536 | 722 | 1,051 | GDDR5 | 256 | 8 | 4,600 | 147.2 | No | 2,218–3,229 | 69.3–100.9 | 5.2 | 75–100 | Internal MXM GPU |
M10 GPU accelerator[27] | 4× GM107 | — | 2,560 | 1,033 | ? | GDDR5 | 4× 128 | 4× 8 | 5,188 | 4× 83 | No | 5,289 | 165.3 | 5.2 | 225 | Internal PCIe GPU (full-height, dual-slot) | ||
M40 GPU accelerator[25][28] | November 10, 2015 | 1× GM200 | — | 3,072 | 948 | 1,114 | GDDR5 | 384 | 12 or 24 | 6,000 | 288 | No | 5,825–6,844 | 182.0–213.9 | 5.2 | 250 | Internal PCIe GPU (full-height, dual-slot) | |
M60 GPU accelerator[29] | August 30, 2015 | 2× GM204-895-A1 | — | 4,096 | 899 | 1,178 | GDDR5 | 2× 256 | 2× 8 | 5,000 | 2× 160 | No | 7,365–9,650 | 230.1–301.6 | 5.2 | 225–300 | Internal PCIe GPU (full-height, dual-slot) | |
P4 GPU accelerator[30] | Pascal | September 13, 2016 | 1× GP104 | — | 2,560 | 810 | 1,063 | GDDR5 | 256 | 8 | 6,000 | 192.0 | No | 4,147–5,443 | 129.6–170.1 | 6.1 | 50–75 | PCIe card
P6 GPU accelerator[31][32] | March 24, 2017 | 1× GP104-995-A1 | — | 2,048 | 1,012 | 1,506 | GDDR5 | 256 | 16 | 3,003 | 192.2 | No | 6,169 | 192.8 | 6.1 | 90 | MXM card | |
P40 GPU accelerator[30] | September 13, 2016 | 1× GP102 | — | 3,840 | 1,303 | 1,531 | GDDR5 | 384 | 24 | 7,200 | 345.6 | No | 10,007–11,758 | 312.7–367.4 | 6.1 | 250 | PCIe card | |
P100 GPU accelerator (mezzanine)[33][34] | April 5, 2016 | 1× GP100-890-A1 | — | 3,584 | 1,328 | 1,480 | HBM2 | 4,096 | 16 | 1,430 | 732 | No | 9,519–10,609 | 4,760–5,304 | 6.0 | 300 | SXM card | |
P100 GPU accelerator (16 GB card)[35] | June 20, 2016 | 1× GP100 | — | 1,126 | 1,303 | No | 8,071–9,340 | 4,036–4,670 | 250 | PCIe card |
P100 GPU accelerator (12 GB card)[35] | June 20, 2016 | — | 3,072 | 12 | 549 | No | 8,071–9,340 | 4,036–4,670 |
V100 GPU accelerator (mezzanine)[36][37][38] | Volta | May 10, 2017 | 1× GV100-895-A1 | — | 5,120 | Unknown | 1,455 | HBM2 | 4,096 | 16 or 32 | 1,750 | 900 | 119,192 | 14,899 | 7,450 | 7.0 | 300 | SXM card
V100 GPU accelerator (PCIe card)[36][37][38] | June 21, 2017 | 1× GV100 | — | Unknown | 1,370 | 112,224 | 14,028 | 7,014 | 250 | PCIe card | ||||||||
V100 GPU accelerator (PCIe FHHL card) | March 27, 2018 | 1× GV100 | — | 937 | 1,290 | 16 | 1,620 | 829.44 | 105,680 | 13,210 | 6,605 | 250 | PCIe FHHL card | |||||
T4 GPU accelerator (PCIe card)[39][40] | Turing | September 12, 2018 | 1× TU104-895-A1 | — | 2,560 | 585 | 1,590 | GDDR6 | 256 | 16 | 5,000 | 320 | 64,800 | 8,100 | Unknown | 7.5 | 70 | PCIe card |
A2 GPU accelerator (PCIe card)[41] | Ampere | November 10, 2021 | 1× GA107 | — | 1,280 | 1,440 | 1,770 | GDDR6 | 128 | 16 | 6,252 | 200 | 18,124 | 4,531 | 140 | 8.6 | 40–60 | PCIe card (half-height, single-slot)
A10 GPU accelerator (PCIe card)[42] | April 12, 2021 | 1× GA102-890-A1 | — | 9,216 | 885 | 1,695 | GDDR6 | 384 | 24 | 6,252 | 600 | 124,960 | 31,240 | 976 | 8.6 | 150 | PCIe card (single-slot) | |
A16 GPU accelerator (PCIe card)[43] | April 12, 2021 | 4× GA107 | — | 4× 1,280 | 885 | 1,695 | GDDR6 | 4× 128 | 4× 16 | 7,242 | 4× 200 | 4× 18,432 | 4× 4,608 | 1,084.8 | 8.6 | 250 | PCIe card (dual-slot) |
A30 GPU accelerator (PCIe card)[44] | April 12, 2021 | 1× GA100 | — | 3,584 | 930 | 1,440 | HBM2 | 3,072 | 24 | 1,215 | 933.1 | 165,120 | 10,320 | 5,161 | 8.0 | 165 | PCIe card (dual-slot) | |
A40 GPU accelerator (PCIe card)[45] | October 5, 2020 | 1× GA102 | — | 10,752 | 1,305 | 1,740 | GDDR6 | 384 | 48 | 7,248 | 695.8 | 149,680 | 37,420 | 1,168 | 8.6 | 300 | PCIe card (dual-slot) | |
A100 GPU accelerator (PCIe card)[46][47] | May 14, 2020[48] | 1× GA100-883AA-A1 | — | 6,912 | 765 | 1,410 | HBM2 | 5,120 | 40 or 80 | 1,215 | 1,555 | 312,000 | 19,500 | 9,700 | 8.0 | 250 | PCIe card (dual-slot) |
H100 GPU accelerator (PCIe card)[49] | Hopper | March 22, 2022[50] | 1× GH100[51] | — | 14,592 | 1,065 | 1,755 CUDA 1,620 TC | HBM2e | 5,120 | 80 | 1,000 | 2,039 | 756,449 | 51,200 | 25,600 | 9.0 | 350 | PCIe card (dual-slot)
H100 GPU accelerator (SXM card) | — | 16,896 | 1,065 | 1,980 CUDA 1,830 TC | HBM3 | 5,120 | 80 | 1,500 | 3,352 | 989,430 | 66,900 | 33,500 | 9.0 | 700 | SXM card | |||
GH200 Superchip (SXM card) | August 8, 2023 | 1× GH100 + 1× Arm Neoverse V2 | — | 16,896 | 1,065 | 1,980 CUDA 1,830 TC | HBM3e | 6,144 | 96–144 | — | 4,900 | 989,430 | 66,900 | 33,500 | 9.0 | 450–1,000 | SXM card |
L40 GPU accelerator[52] | Ada Lovelace | October 13, 2022 | 1× AD102[53] | — | 18,176 | 735 | 2,490 | GDDR6 | 384 | 48 | 2,250 | 864 | 362,066 | 90,516 | 1,414 | 8.9 | 300 | PCIe card (dual-slot) |
L4 GPU accelerator[54][55] | March 21, 2023[56] | 1× AD104[57] | — | 7,424 | 795 | 2,040 | GDDR6 | 192 | 24 | 1,563 | 300 | 121,000 | 30,300 | 490 | 8.9 | 72 | HHHL single-slot PCIe card
Notes
- 1. To calculate the processing power, see Tesla (microarchitecture)#Performance, Fermi (microarchitecture)#Performance, Kepler (microarchitecture)#Performance, Maxwell (microarchitecture)#Performance, or Pascal (microarchitecture)#Performance. A number range specifies the minimum and maximum processing power at, respectively, the base clock and the maximum boost clock.
- 2. Core architecture version according to the CUDA programming guide.
- 3. GPU Boost is a default feature that increases the core clock rate while remaining under the card's predetermined power budget. Multiple boost clocks are available, but this table lists the highest clock supported by each card.[10]
- 4. Specifications not published by Nvidia are assumed to be based on the GeForce 8800 GTX.
- 5. Specifications not published by Nvidia are assumed to be based on the GeForce GTX 280.
- 6. Specifications not published by Nvidia are assumed to be based on the Quadro FX 5800.
- 7. With ECC on, a portion of the dedicated memory is used for ECC bits, so the available user memory is reduced by 12.5% (e.g. 4 GB of total memory yields 3.5 GB of user-available memory).
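The processing-power and ECC notes above can be made concrete with the Tesla K40's figures from the table (2,880 CUDA cores, 745 MHz base clock, 875 MHz boost clock, 12 GB memory); the 1/3 FP64 rate of its GK110 chip is background knowledge consistent with the table:

```python
# Single-precision GFLOPS = CUDA cores * clock (MHz) * 2 ops per FMA / 1000.
# Tesla K40 figures from the specifications table above.
cores = 2880
base_gflops = cores * 745 * 2 / 1000    # at base clock
boost_gflops = cores * 875 * 2 / 1000   # at max boost clock

# GK110 runs double precision at 1/3 of the single-precision rate,
# matching the table's 1,430-1,680 GFLOPS range.
base_dp_gflops = base_gflops / 3

# With ECC enabled, 12.5% of memory holds ECC bits (see the ECC note):
usable_gb = 12 * (1 - 0.125)

print(base_gflops, boost_gflops, usable_gb)  # 4291.2 5040.0 10.5
```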
References
- ↑ Casas, Alex (19 May 2020). "NVIDIA Drops Tesla Brand To Avoid Confusion With Tesla". Wccftech. Retrieved 8 July 2020.
- ↑ "NVIDIA Supercomputing Solutions".
- ↑ "NVIDIA A100 GPUs Power the Modern Data Center". NVIDIA. Retrieved 8 July 2020.
- ↑ "High Performance Computing - Supercomputing with Tesla GPUs".
- ↑ "Professional Workstation Solutions".
- ↑ "Nvidia to Integrate ARM Processors in Tesla". 1 November 2012.
- ↑ Walton, Mark (6 April 2016). "Nvidia unveils first Pascal graphics card, the monstrous Tesla P100". Ars Technica. Retrieved 19 June 2019.
- ↑ Tesla Technical Brief (PDF)
- ↑ "Nvidia chases defense, intelligence ISVs with GPUs". www.theregister.com. Retrieved 8 July 2020.
- ↑ "Nvidia GPU Boost For Tesla" (PDF). January 2014. Retrieved 7 December 2015.
- ↑ "Tesla C1060 Computing Processor Board" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Difference between Tesla S1070 and S1075". 31 October 2008. Retrieved 29 January 2017. "S1075 has one interface card."
- 1 2 "Tesla C2050 and Tesla C2070 Computing Processor" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla M2050 and Tesla M2070/M2070Q Dual-Slot Computing Processor Modules" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla C2075 Computing Processor Board" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ Hand, Randall (23 August 2010). "NVidia Tesla M2050 & M2070/M2070Q Specs OnlineVizWorld.com". VizWorld.com. Retrieved 11 December 2015.
- ↑ "Tesla M2090 Dual-Slot Computing Processor Module" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla K10 GPU accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla K20 GPU active accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla K20 GPU accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla K20X GPU accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla K40 GPU accelerator" (PDF). Nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla K80 GPU accelerator" (PDF). Images.nvidia.com. Retrieved 11 December 2015.
- ↑ "Nvidia Announces Tesla M40 & M4 Server Cards - Data Center Machine Learning". Anandtech.com. Retrieved 11 December 2015.
- 1 2 "Accelerating Hyperscale Datacenter Applications with Tesla GPUs | Parallel Forall". Devblogs.nvidia.com. 10 November 2015. Retrieved 11 December 2015.
- ↑ "Tesla M6" (PDF). Images.nvidia.com. Retrieved 28 May 2016.
- ↑ "Tesla M10" (PDF). Images.nvidia.com. Retrieved 29 October 2016.
- ↑ "Tesla M40" (PDF). Images.nvidia.com. Retrieved 11 December 2015.
- ↑ "Tesla M60" (PDF). Images.nvidia.com. Retrieved 27 May 2016.
- 1 2 Smith, Ryan (13 September 2016). "Nvidia Announces Tesla P40 & Tesla P4 - Network Inference, Big & Small". Anandtech. Retrieved 13 September 2016.
- ↑ "Tesla P6" (PDF). www.nvidia.com. Retrieved 7 March 2019.
- ↑ "Tesla P6 Specs". www.techpowerup.com. Retrieved 7 March 2019.
- ↑ Smith, Ryan (5 April 2016). "Nvidia Announces Tesla P100 Accelerator - Pascal GP100 for HPC". Anandtech.com. Anandtech.com. Retrieved 5 April 2016.
- ↑ Harris, Mark. "Inside Pascal: Nvidia's Newest Computing Platform". Retrieved 13 September 2016.
- 1 2 Smith, Ryan (20 June 2016). "NVidia Announces PCI Express Tesla P100". Anandtech.com. Retrieved 21 June 2016.
- 1 2 Smith, Ryan (10 May 2017). "The Nvidia GPU Technology Conference 2017 Keynote Live Blog". Anandtech. Retrieved 10 May 2017.
- 1 2 Smith, Ryan (10 May 2017). "NVIDIA Volta Unveiled: GV100 GPU and Tesla V100 Accelerator Announced". Anandtech. Retrieved 10 May 2017.
- 1 2 Oh, Nate (20 June 2017). "NVIDIA Formally Announces V100: Available later this Year". Anandtech.com. Retrieved 20 June 2017.
- ↑ "NVIDIA TESLA T4 TENSOR CORE GPU". NVIDIA. Retrieved 17 October 2018.
- ↑ "NVIDIA Tesla T4 Tensor Core Product Brief" (PDF). www.nvidia.com. Retrieved 10 July 2019.
- ↑ "NVIDIA TESLA A2 TENSOR CORE GPU".
- ↑ "NVIDIA TESLA A10 TENSOR CORE GPU".
- ↑ "NVIDIA TESLA A16 TENSOR CORE GPU".
- ↑ "NVIDIA TESLA A30 TENSOR CORE GPU".
- ↑ "NVIDIA TESLA A40 TENSOR CORE GPU".
- ↑ "NVIDIA TESLA A100 TENSOR CORE GPU". NVIDIA. Retrieved 14 January 2021.
- ↑ "NVIDIA Tesla A100 Tensor Core Product Brief" (PDF). www.nvidia.com. Retrieved 22 September 2020.
- ↑ Smith, Ryan (14 May 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
- ↑ https://www.nvidia.com/en-us/data-center/h100/
- ↑ https://wccftech.com/nvidia-hopper-gh100-gpu-official-5nm-process-worlds-fastest-hpc-chip-80-billion-transistors-hbm3-memory/
- ↑ https://www.techpowerup.com/gpu-specs/h100-pcie.c3899
- ↑ https://www.nvidia.com/en-us/data-center/l40/
- ↑ https://www.techpowerup.com/gpu-specs/l40.c3959
- ↑ https://www.nvidia.com/en-us/data-center/l4/
- ↑ https://images.nvidia.com/aem-dam/Solutions/Data-Center/l4/nvidia-ada-gpu-architecture-whitepaper-v2.1.pdf
- ↑ https://investor.nvidia.com/news/press-release-details/2023/NVIDIA-and-Google-Cloud-Deliver-Powerful-New-Generative-AI-Platform-Built-on-the-New-L4-GPU-and-Vertex-AI/default.aspx
- ↑ https://www.techpowerup.com/gpu-specs/l4.c4091