• 4 Posts
  • 1.9K Comments
Joined 2 years ago
Cake day: September 7th, 2023

  • The tweet before that:

    Let me tell you something about Akash. During a project at Berkeley, I accidentally deleted our entire codebase 2 days before the deadline. I panicked. Akash just stared at the screen, shrugged, and rewrote everything from scratch in one night—better than before.

    This says more about you, the scale of the project, the poor organisation of your group, the lack of challenge in Berkeley group projects (nice namedrop though), your failure to understand the exercise (the goal is to learn how to work as a group and notice the networking problems), and the point of being at a university (networking, partying and learning) than anything else.

    Hell, I know of a project where this also happened, and they didn't manage to rewrite it, as it actually took a lot of time.

  • Nostalgia has a lowkey reactionary impulse to it (see also why those right-wing reactionary gamer streamers, who do ten-hour criticize-a-movie reaction streams, have their backgrounds filled with consumer nerd-media toys, and almost never books), and fear of change is also a part of conservatism. ‘Engineering minds’ who think they can solve things, and who have somewhat more rigid thinking, also tend to be attracted to more extremist ideologies (which usually have more rigid rules and fewer exceptions). This also leads back to the problem where people like this fail to realize their minds are not typical (‘I can easily use a console, so everyone else can and should’). So it makes sense to me. Not sure if the UI thing is elitism or just a strong desire to create and patrol the borders of an ingroup. (But isn’t that just what elitism is?)

  • But the Ratspace doesn’t just expect them to actually do things, but also to self-improve. That is another step above merely human-level intelligence; it also assumes that self-improvement is possible at all (and, at the highest level of nuttiness, unbounded), something we have not even seen demonstrated. And it certainly doesn’t seem to be, as the intervals between newer, better versions of ChatGPT seem to be increasing (an interface around it doesn’t count). So imho, given ChatGPT/LLMs and the lack of fast improvements we have seen recently (some even say performance has decreased, so we are not even getting incremental innovations), the ‘could lead to AGI-foom’ possibility space has actually shrunk, as LLMs will not take us there. And everything including the kitchen sink has been thrown at the idea. To use some AI-weirdo lingo: with the decels not in play(*), why are the accels not delivering?

    *: And let’s face it, on the fronts that matter, we have lost the battle so far.

    E: full disclosure, I have not read Zitron’s article; they are a bit long at times. Look at it this way: you could read a quarter of a SSC article in the same time.