![](https://mbin.grits.dev/media/8a/4c/8a4cd03b01da0ea7fa37eb0fa5c51e295a9dfb39be2df5d03c38b23a57e3873a.png)
![](https://lemmy.world/pictrs/image/8f2046ae-5d2e-495f-b467-f7b14ccb4152.png)
Apparently the only way the candidates will agree to do it is if the format is so stilted that there's no chance of anyone learning anything or of seeing the candidates challenged on anything. It's basically just a turn-taking version of a campaign commercial.
What, indeed, is the point. Like a lot of American politics, the whole "debate" survives as a vestige of a thing (now long forgotten) that was useful and productive in its original form, but has since mutated into a useless and unrecognizable monstrosity, which you have to pretend is super serious and important if you want to be on TV.
Yeah. It is fairly weird to me that it's so common to take the raw output of the LLM and send that straight to the user, and then to try to use fine-tuning to make that raw output look the way you want.
To me it seems obvious that something like having the LLM emit a little JSON block, including some field that covers "how sure are you that this is actually true," is more flexible, simpler, cheaper, and works better.
But what do I know
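To make the idea concrete, here's a minimal sketch of what that could look like. Everything here is hypothetical: the field names (`answer`, `confidence`), the prompt wording, and the 0.7 threshold are made up for illustration, and the "raw model output" is just a hardcoded string since no real model is wired up.

```python
import json
import re

# Hypothetical system prompt: instead of showing the model's raw text to the
# user, ask it to wrap its answer in a JSON object with a self-reported
# confidence field.
SYSTEM_PROMPT = (
    "Answer the user's question. Respond ONLY with a JSON object of the form "
    '{"answer": "<your answer>", "confidence": <number 0.0-1.0, how sure you '
    "are that the answer is actually true>}"
)

def parse_llm_output(raw: str) -> dict:
    """Pull the first JSON object out of the raw model output."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("model did not emit a JSON block")
    return json.loads(match.group(0))

def render_for_user(raw: str, threshold: float = 0.7) -> str:
    """Show the answer, but flag it when the model's own confidence is low."""
    parsed = parse_llm_output(raw)
    answer = parsed["answer"]
    if parsed["confidence"] < threshold:
        return f"(unverified) {answer}"
    return answer

# Simulated raw model output, standing in for a real API call:
raw_output = 'Sure! {"answer": "The Eiffel Tower is 330 m tall.", "confidence": 0.55}'
print(render_for_user(raw_output))  # → (unverified) The Eiffel Tower is 330 m tall.
```

The point of the sketch is that once the output is structured, the "how should this look to the user" decision moves into ordinary code (a threshold, a label, a retry) instead of being baked into model weights via fine-tuning.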