If I find myself repeating something more than twice, I just ask, “Can this be a function?” If yes, I move it into one. If not, I just leave it as it is.
Life’s too short to spend all day rewriting your code.
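The rule of thumb above can be sketched in a few lines of Python; the snippet and names here are hypothetical, just to illustrate pulling a thrice-repeated fragment into one function:

```python
# Hypothetical example: the same name-cleanup snippet appeared in three
# places, so it gets extracted into a single helper function.

def normalize_name(raw: str) -> str:
    """Trim whitespace, collapse inner spaces, and title-case a name."""
    return " ".join(raw.split()).title()

# Each former copy of the snippet becomes one readable call site.
print(normalize_name("  ada   lovelace "))  # Ada Lovelace
```

The payoff isn’t purity, it’s that a later fix only has to happen in one place.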
Yes, but I would also hope that if you have the wherewithal to install Linux, you have the wherewithal to look up an unknown command before running it with superuser privileges.
If you want a pretty cool example, Le Morte d’Arthur was written in prison.
They’re definitely among the worst of the worst. It’s always surprised me how comparatively sterile their wiki page is. Feels like they’ve got someone cleaning it up.
w++ is a programming language now 🤡
Lazy is right. Spending fifty hours to automate a task that doesn’t take even five minutes is commonplace.
It takes laziness to new, artful heights.
That’s only the first stage. Once you get tired enough, you start writing code that not even you can understand the next morning, but which you’re loath to change because “it just works”.
If you want to disabuse yourself of the notion that AI is close to replacing programmers for anything but the most mundane and trivial tasks, try having GPT-4 generate a novel implementation of moderate complexity. Watch it import mystery libraries that do exactly what you want the code to do, but that don’t actually exist.
Yeah, you can do a lot without writing a single line of code. You can certainly interact with the models, because others who can code have already done the legwork. But someone still has to do it.
It really is a big range. From baby’s first prompting on a big corpo model, learning how tokens work, to setting up your own environment to run models locally (because hey, not everyone knows how to use git), to soft prompting, to training your own weights.
Nobody is realistically building foundation models from scratch unless they work at Google or the like, though.
I read it a long time ago. The format is interesting, certainly novel. I suppose that’s the selling point, more than the prose.
To me it seemed like there were many competing “ways” to read it as well. Like a maze, you can go different paths. Do you read it front to back? Niggle through the citations? Thread back through the holes? It’s not often you get a book that has this much re-read value.
The assertion that they cannot be cheap is funny, when Vicuna 13B was trained on all of $300.
Not $300,000. $300. And that gets you a model that’s near parity with ChatGPT.
It may be an opinion, but pointing that out won’t make me like Java any more.