Don’t learn to code: Nvidia’s founder Jensen Huang advises a different career path

Don’t learn to code, advises Jensen Huang of Nvidia. Thanks to AI, everybody will soon become a capable programmer simply by using human language.

  • Sibbo@sopuli.xyz · 9 months ago

    Founder of a company that makes major revenue selling GPUs for machine learning says machine learning is good.

    • Murvel@lemm.ee · 9 months ago

      Yes, but Nvidia relies heavily on programmers itself. Without them Nvidia wouldn’t have a single product. The fact that he makes these claims despite this is worth taking note of.

      • WhatAmLemmy@lemmy.world · 9 months ago

        Lol. They’re at the top of the food chain. They can afford the best developers. They do not benefit from competition. As with all leading tech corporations, they are protectionist, and benefit more from stifling competition than from innovation.

        Also, more broadly, the oligarchy don’t want the masses to understand programming because they don’t want them to fundamentally understand logic and how information systems work, because civilization is an information system. It makes more sense when you realize Linux/FOSS is the socialism of computing, and anti-competitive closed-source corporations like Nvidia (notorious for hindering Linux and FOSS) are the capitalist class of computing.

    • hitmyspot@aussie.zone · 9 months ago

      It doesn’t make him wrong.

      Just like we can now use an LLM to create letters or emails with a particular tone, it’s not going to be a big leap to let it do something similar with coding. It’s quite exciting, really. Lots of people have ideas for websites or apps but no technical knowledge to build them. AI may allow that, just like it allows non-artists to create art.

      • TangledHyphae@lemmy.world · 9 months ago

        I use AI to write code for work every day. Many different models and services, including https://ollama.ai on my own hardware. It’s useful for a developer when they can take the code and refactor it to fit into large code-bases (after fixing its inevitable broken code here and there), but it is by no means anywhere close to actually successfully writing code all on its own. Eventually maybe, but nowhere near anytime soon.
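
        For context, this is roughly the shape of what I mean by running it on my own hardware: a minimal sketch that asks a local Ollama server to draft some code. The model name is just an example, and whatever comes back still needs the review and refactoring I mentioned.

```python
# Minimal sketch: ask a locally hosted Ollama model to draft a function.
# Assumes the default Ollama port and that a code model has been pulled,
# e.g. `ollama pull codellama` (the model name here is only an example).
import requests

def draft_code(prompt: str, model: str = "codellama") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # The output is a starting point, not something to ship as-is.
    print(draft_code("Write a Python function that parses an ISO 8601 date string."))
```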

        • Lmaydev@programming.dev · 9 months ago

          Agreed. I mainly use it for learning.

          Instead of googling and skimming a couple of blogs or Stack Overflow posts, I now just ask the AI. It pulls up the exact info I need and sources it all. And being able to ask follow-up questions is great.

          It’s great for learning new languages and frameworks.

          It’s also very good at writing unit tests.
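
          For example, this is the sort of thing I’d ask it to produce: a tiny function plus pytest cases. The function and the cases below are a made-up illustration, not actual model output.

```python
# Sketch of the kind of request I mean: "write pytest tests for this function".
# The function and the cases are invented for illustration.
import re
import pytest

def slugify(text: str) -> str:
    """Lowercase, trim, and replace runs of non-alphanumerics with a dash."""
    return re.sub(r"[^a-z0-9]+", "-", text.strip().lower()).strip("-")

@pytest.mark.parametrize("raw, expected", [
    ("Hello, World!", "hello-world"),
    ("  spaced   out  ", "spaced-out"),
    ("already-slugged", "already-slugged"),
    ("", ""),
])
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```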

          Also for recommending frameworks or software for your use case.

          I don’t see it replacing developers so much as reducing the number of developers needed, like Excel did for office workers.

          • TangledHyphae@lemmy.world · 9 months ago

            You just described all of my use cases. I need to get more comfortable with Copilot- and Codeium-style services again; I enjoyed them 6 months ago to some extent. Unfortunately my current employer has to be federally compliant with government security protocols and I’m not allowed to ship any code in or out of some dev machines. In lieu of that, I still run LLMs on another machine acting, like you mentioned, as sort of my Stack Overflow replacement. I can describe anything or ask anything I want, and immediately get extremely specific custom code examples.

            I really need to get codeium or copilot working again just to see if anything has changed in the models (I’m sure they have.)

        • hitmyspot@aussie.zone · 9 months ago

          It can’t yet tell when the output is ridiculous or incorrect for non-coding tasks, but it will get there. Same for coding. It will continue to grow in complexity and ability.

          It will get there, eventually. I don’t think it will be writing complex code any time soon, but I can see it being aware of all the libraries and FOSS that no one person can keep across.

          I would compare learning to code to learning to do accounting manually. Yes, you’ll still need to understand it to be a coder, but for the average person who can’t code, AI will do a good enough job, just like we now use accounting software for taxes or budgets that would have been done professionally before. Complex stuff will be human-done, or human-reviewed, or professional coders giving more technical instructions to the AI. For simple coding, like the Python script you might write now for some trivial task, AI will do it.

        • Jolan@lemmy.world · 9 months ago

          I think this is going to age really badly. I don’t like LLMs, but I think it will happen soon. People also said that AI as we see it now was decades away, but we got it quite quickly, so I think it’s a very small step to go from writing fully grammatically correct English to fully correct code. It’s basically just another language the AI has to learn. But I guess what do I know. We’ll just have to wait and see.

          • TangledHyphae@lemmy.world · 9 months ago

            I’ve been doing this for over a year now; I started with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky; even GPT-4 has evolved many times over without people really knowing what’s happening behind the scenes.) The problem still remains the “context window.” Claude.ai is > 100k tokens now I think, but the context window still limits how much code an entire ‘session’ can produce. I’m still trying to push every model to its limits, but another big problem in the industry right now is how effectiveness, measured via “perplexity,” degrades as context length grows.

            https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small

            This plot shows that as the window grows in size (directly proportional to the number of tokens in the code you insert into the window, combined with every token it generates at the same time), everything it produces becomes less accurate and more perplexing overall.
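
            For anyone unfamiliar with the term, here’s a toy illustration of what “perplexity” measures: the exponential of the average negative log-likelihood the model assigns to each token. The numbers below are invented for illustration, not taken from any real model or from that plot.

```python
# Toy perplexity calculation: exp of the mean negative log-likelihood
# over the per-token probabilities a model assigned. Lower = less "perplexed".
# The probabilities here are invented purely for illustration.
import math

def perplexity(token_probs):
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

confident_run = [0.9, 0.8, 0.85, 0.9]   # model is sure of each next token
uncertain_run = [0.3, 0.2, 0.25, 0.1]   # model is guessing, e.g. deep into a long context

print(perplexity(confident_run))  # ~1.2
print(perplexity(uncertain_run))  # ~5.1
```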

            But you’re right overall: these things will continue to improve, but you still need an engineer to actually make the code function in a particular environment. I just don’t get the feeling we’ll see that within the next few years, and if it does happen, then every IT worker on earth is effectively useless, along with every desk job known to man, since an LLM would be able to reason about how to automate any task in any language at that point.

      • MartianSands@sh.itjust.works · 9 months ago

        It might not make him wrong, but he also happens to be wrong.

        You can’t compare AI art or literature to AI software, because the former are allowed to be vague or interpretive while the latter has to be precise and formally correct. AI can’t even reliably do art yet; it frequently requires several attempts or considerable support to get something which looks right, but in software “close” frequently isn’t useful at all. In fact, it can easily be close enough to look right at first glance while actually being catastrophically wrong once you try to use it for real (see: every bug in any released piece of software ever).

        Even when AI gets good enough to reliably produce what it’s asked for first time and every time (which is still a long way off), a sufficiently precise description of what you want is exactly what programmers spend their lives writing. Code is a description of a program which another program (such as a compiler) can convert into instructions for the computer. If someone comes up with a very clever program which can fill in the gaps by using AI to interpret what it’s been given, then what they’ve created is just a new kind of programming language for a new kind of compiler.
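
        To make that last point concrete, here’s a toy sketch of such a “compiler,” with the model call stubbed out. Nothing here is a real product; it’s just the shape of the idea, and notice that the “source code” is a precise spec someone still has to write.

```python
# Toy "natural-language compiler": the spec is the source code,
# an LLM is the back-end. The llm() call is deliberately a stub.

def llm(prompt: str) -> str:
    # Placeholder for whatever model you would actually call.
    raise NotImplementedError("plug in a real model call here")

def compile_spec(spec: str) -> str:
    """Turn a precise natural-language spec into Python source."""
    return llm(
        "Write a single Python function implementing exactly this spec, "
        "with no commentary:\n" + spec
    )

spec = """
Function name: median
Input: a non-empty list of numbers
Output: the median; for an even-length list, the mean of the two middle values
Errors: raise ValueError on an empty list
"""
# Writing the spec above precisely enough is already the programmer's job.
```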

        • hitmyspot@aussie.zone · 9 months ago

          I don’t disagree with your point. I think that is where we are heading. How we interact with computers will change. We’re already moving away from keyboard typing and clicks, to gestures and voice or image recognition.

          We likely won’t even call it coding. “Hey Google, I’ve downloaded all the episodes for the current season of Pimp My PC, can you rename the files to my naming convention and drop them into Jellyfin?” The AI will know to write a Python script to do so. I expect it to be invisible to the user.
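
          Behind the scenes it might generate something like the throwaway script below. The paths, show name, and naming convention here are obviously just made up for illustration.

```python
# Hypothetical example of the script the assistant might generate:
# rename downloaded episodes to "Show - S01E02.ext" and move them
# into a Jellyfin library folder. Paths and patterns are made up.
import re
import shutil
from pathlib import Path

DOWNLOADS = Path.home() / "Downloads"
LIBRARY = Path.home() / "jellyfin" / "Shows" / "Pimp My PC" / "Season 01"
SHOW = "Pimp My PC"

# Matches names like "pimp.my.pc.s01e03.1080p.mkv"
pattern = re.compile(r"s(\d{2})e(\d{2})", re.IGNORECASE)

LIBRARY.mkdir(parents=True, exist_ok=True)
for f in DOWNLOADS.glob("*.mkv"):
    m = pattern.search(f.name)
    if not m:
        continue
    season, episode = m.groups()
    target = LIBRARY / f"{SHOW} - S{season}E{episode}{f.suffix}"
    shutil.move(str(f), str(target))
    print(f"{f.name} -> {target}")
```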

          So, yes, it is just a different instruction set. But that’s all computers are. Data in, data out.

      • variaatio@sopuli.xyz · 9 months ago

        Well, the difference is that you have to know coding to know whether the AI produced what you actually wanted.

        Anyone can read a letter and tell whether the AI hallucinated or actually produced what you wanted.

        With code, it might produce something that on the first try does what you asked. However, it turns out the AI hallucinated a bug into the code for some edge case or special case.

        Hallucinating is not a minor hiccup or minor bug; it is a fundamental feature of LLMs, since they aren’t actually smart. An LLM is a stochastic regurgitator: it doesn’t know what you asked or understand what it is actually doing, it is matching prompt patterns to output. With enough training patterns to match, it statistically usually ends up about right. However, this is not guaranteed, and that is the main weakness of the system. More good training data makes it more likely to produce good results more often. But for business-critical stuff, for example, you aren’t interested in whether it got it about right the other 99 times. It 100% has to get it right this one time, since this code goes to a production business deployment.

        I guess one could write a comprehensive enough verified set of tests, including all the edge cases, and verify the result with that. However, now you have just shifted the job: instead of a programmer programming the program, you have a programmer programming the very, very comprehensive testing routines. Which can’t be done by an LLM, since the whole point of the testing routines is to check for the inherent unreliability of the LLM output.
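
        Roughly what I mean by those testing routines, as a small sketch: a human-written edge-case harness that gates whatever the model produced. The “generated” function below is just stand-in text for model output; the harness is the part a human still has to get right.

```python
# Sketch: a human-written edge-case harness that gates LLM-generated code.
# The "generated" source below is a stand-in for whatever the model returned.
generated_source = """
def parse_price(text):
    return float(text.replace("$", "").replace(",", ""))
"""

# Human-written spec: the edge cases the model is not trusted to get right.
edge_cases = [
    ("$1,234.50", 1234.50),
    ("0", 0.0),
    ("$0.99", 0.99),
]

namespace = {}
exec(generated_source, namespace)   # load the candidate implementation
parse_price = namespace["parse_price"]

failures = [(raw, want, parse_price(raw))
            for raw, want in edge_cases
            if parse_price(raw) != want]

if failures:
    raise SystemExit(f"Rejecting generated code, failed cases: {failures}")
print("Generated code passed the human-written edge cases.")
```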

        It’s a nice toy for someone wanting to make quick-and-dirty test code (maybe) to do thing X, and then try to find out whether it actually does what was asked or has unforeseen behavior, since I don’t know what the behavior of the code is designed to be, because I didn’t write the code. Good for toying around and maybe for quick-and-dirty brainstorming. Not good enough for anything critical that has to be guaranteed to work, with the promise of a service contract and so on.

        So the real big job of the future will not be prompt engineers, but quality assurance and testing engineers who have to be around to guard against hallucinating LLMs and similar AIs. Prompts can be gotten from anyone; what is harder is finding out whether the prompt actually produced what it was supposed to produce.

      • SlopppyEngineer@lemmy.world · 9 months ago

        Until somewhere things go wrong, the supplier tries “but an AI wrote it” as a defense when the client sues them for not delivering what was agreed upon, the defense gets struck down, and the result is very expensive compensation that spooks the entire industry.

        • hitmyspot@aussie.zone · 9 months ago

          Air Canada already tried that and lost. They had to refund the customer because the chatbot gave incorrect information.

          • BombOmOm@lemmy.world · 9 months ago

            Turns out the chatbot gave the correct information. Air Canada just didn’t realize they had legally enabled the AI to set company policy. :)