

Do you want shadowrunners to break into your house to steal your discs? Because this is how you get shadowrunners.
Do you want shadowrunners to break into your house to steal your discs? Because this is how you get shadowrunners.
Pre GPT data is going to be like the steel they fish up from before there were nuclear tests.
First confirmed openly Dark Enlightenment terrorist is a fact. (It is linked here directly to NRx, but DE is a bit broader than that, it isn’t just NRx, and his other references seem to be more garden variety neo-nazi type (not that this kind of categorizing really matters)).
Yes, you are right, which is why I didnt say that, should have been more explicit and mention that it was possible. Esp if it was a first year project and someone had a decade of programming exp already (but then the lack of versioning us weird, but also not impossible, as coding vs project management are different skills).
E: But it gets weirder : https://xcancel.com/gmchariszhang/status/1886361422445138099#m they were using git. Indeed https://xcancel.com/martinmalindacz/status/1886390223141048749#m
Yeah that is also what makes it strange, like was this their very first project, did he study CS in the 90s? Did their profs set them up to fail so they learned from that? Did they prank him? Did he delete it on purpose (that is how the project I knew of did it (there was this blog post (or something similar, without any proof im just going to blame ESR, hell he prob wrote something like that) at the time that told people to write a project twice, once as a draft then delete everything and do it again knowing the old pitfalls))? The very specific set of things that are needed for this to be possible is just odd. Makes me wonder if Akash just had a local copy because Jon just was that tech illiterate.
A high r/thathappend feeling.
Yeah, people keep repeating that he is the creator of Roblox all over the place, it is really odd. He is just the richest guy there, the Elon Musk of Roblox (Captain obvious here, this is meant mega derogatory).
E: I see what you did there :)
Btw, he isnt the creator, he is just the fourth engineer they hired. He is much more a content creator (and game patent asshole) than the creator: https://roblox.fandom.com/wiki/Community:Shedletsky
E: he also created this “A Bridge Too Far is a game that was created on September 21, 2007. The objective of the game is to blow up the bridge.” Which as a Dutch guy just makes a lot of alarm bells go off. (In market garden the bridges needed to be taken intact). Im not saying he is a crypto neonazi btw, it is just a dumb bad name.
The tweet before that:
Let me tell you something about Akash. During a project at Berkeley, I accidentally deleted our entire codebase 2 days before the deadline. I panicked. Akash just stared at the screen, shrugged, and rewrote everything from scratch in one night—better than before.
This says more about you, the scale of the project, bad organisation of your group, and the lack of challenge of Berkeley (nice namedrop though) group projects, and the failure of understanding the excersize (the goal is to learn how to work as a group and notice the networking problems), and the goals of being at a university (networking, partying and learning) than anything else.
Hell I know of a project that also did this and they didnt manage to rewrite the project, as it actually took a lot of time.
I want my kids to be science experiments as there is no other way an ethics board would approve this kind.
It is a bit like alien vs predator. Whoever wins, we lose. (And even that is owned by disney).
Uni is also a good place to learn to fail. A uni run startup imitation place can ensure both problems (guided by profs if needed) and teach people how to do better, without being in the pockets of VCs also better hours, and parties.
Nostalgia has a lowkey reactionary impulse part(see also why those right wing reactionary gamer streamers who do ten hour reactive criticize a movie streams have their backgrounds filled with consumer nerd media toys (and almost never books)) and fear of change is also a part of conservatism. ‘Engineering minds’ who think they can solve things, and have a bit more rigid thinking also tend to be attracted to more extremist ideologies (which usually seems to have more rigid rules and lesser exceptions), which also leads back to the problem where people like this are bad at not realizing their minds are not typical (I can easily use a console so everyone else can and should). So it makes sense to me. Not sure if the ui thing is elitism or just a strong desire to create and patrol the borders of an ingroup. (But isnt that just what elitism is?)
Right, yeah i just recall that for a high enough bit of towers the amount of steps needed to solve it rises quickly. The story, “Now Inhale”, by Eric Frank Russell, uses 64 discs. Fun story.
Min steps is 2 to the power the number of disks minus 1.
Programming a system that solves it was a programming excersize for me a long time ago. Those are my stronger memories of it
Latter test fails if they write a specific bit of code to put out the ‘llms fail the river crossing’ fire btw. Still a good test.
Sorry what is the link between bioware and towers of hanoi? (I do know about the old “one final game before your execution” science fiction story).
I have not looked at a video just images but this looks like it is unreadable outside. Which brings up an interesting failure of testing, that they never left the building with the sun out.
But the Ratspace doesn’t just expect them to actually do things, but also self improve. Which is another step above just human level intelligence, it also means that self improvement is possible (and on the highest level of nuttyness, unbound), a thing we have not even seen if it is possible. And it certainly doesn’t seem to be, as the lengths between a newer better version of chatGPT seems to be increasing (an interface around it doesn’t count). So imho due to chatgpt/LLMs and the lack of fast improvements we have seen recently (some even say performance has decreased, so we are not even getting incremental innovations), means that the ‘could lead to AGI-foom’ possibility space has actually shrunk, as LLMs will not take us there. And everything including the kitchen sink has been thrown at the idea. To use some AI-weirdo lingo: With the decels not in play(*), why are the accels not delivering?
*: And lets face it, on the fronts that matter, we have lost the battle so far.
E: full disclosure I have not read Zitrons article, they are a bit long at times, look at it, you could read 1/4th of a SSC article in the same time.
I really wonder if those prompts can be bypassed by doing a ‘ignore further instructions’ line. As looking at the Grok prompt they seem to put the main prompt around the user supplied one.