threelonmusketeers@sh.itjust.works to [Dormant] moved to !teslamotors@lemmy.zip (@lemmy.world) · English · 2 years ago

Elon Musk asked Nvidia to prioritize GPU shipments to X over Tesla, emails reveal

electrek.co

38 points · 7 comments
Emails circulating at Nvidia show that Elon Musk asked them to prioritize processor shipments to X over Tesla. Amid concerns...
  • breakingcups@lemmy.world · 11 points · 2 years ago

    I’m sure his Tesla shareholders will understand.

    • lesbian_seagull@lemm.ee · 2 points · edited · 2 years ago

      deleted by creator

  • uebquauntbez@lemmy.world · 9 points · 2 years ago

    Another card removed from this house of cards? We’ll see. *me gets popcorn*

  • ichbinjasokreativ@lemmy.world · 3 points (2 downvotes) · 2 years ago

    Tesla has a decent relationship with AMD though, right? That means Nvidia is nice-to-have for them, but not necessary.

    • Endmaker@lemmy.world · 4 points · 2 years ago

      How are AMD GPUs useful though? Last I heard, CUDA (and cuDNN) is still an Nvidia-only thing.

      • ichbinjasokreativ@lemmy.world · 7 points · 2 years ago

        There are compatibility layers that let CUDA code run on AMD, and everything AI can also run natively on ROCm. Using Nvidia is a choice, not a necessity.
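        (For context on the "compatibility layer" idea: AMD's HIPIFY tooling, hipify-perl and hipify-clang, ports CUDA sources by rewriting CUDA API calls to their HIP equivalents. The toy translator below is a hypothetical sketch, not the real tool, but the renames it applies are the actual one-to-one CUDA→HIP names:)

        ```python
        # Toy sketch of what AMD's HIPIFY tooling does: rewrite CUDA API
        # calls to their HIP equivalents so the source builds for AMD GPUs.
        # This covers only a few well-known renames; the real tools
        # (hipify-perl, hipify-clang) handle the full API surface.
        CUDA_TO_HIP = {
            "cudaMalloc": "hipMalloc",
            "cudaMemcpy": "hipMemcpy",
            "cudaFree": "hipFree",
            "cudaDeviceSynchronize": "hipDeviceSynchronize",
            "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
            "cuda_runtime.h": "hip/hip_runtime.h",
        }

        def hipify(source: str) -> str:
            """Rename CUDA identifiers, longest match first, so that e.g.
            cudaMemcpyHostToDevice is rewritten before cudaMemcpy."""
            for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
                source = source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
            return source

        cuda_src = "cudaMalloc(&p, n); cudaMemcpy(p, h, n, cudaMemcpyHostToDevice);"
        print(hipify(cuda_src))
        # → hipMalloc(&p, n); hipMemcpy(p, h, n, hipMemcpyHostToDevice);
        ```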

        • Endmaker@lemmy.world · 3 points · 2 years ago

          Oh wow. TIL

        • notfromhere@lemmy.ml · 2 points · 2 years ago

          What is the best working compatibility layer to run CUDA on AMD? ROCm seems to drop support pretty quickly after release, so it’s hard for it to get a foothold. As Karpathy has shown, doing low-level C++ has some amazing results…
