When you watch Star Wars, you need to understand that the Galactic Empire is about the United States empire.
oh wow! cool armor uniforms! and I get a blaster! pew pew pew.
/s
More people need to watch this; too often I've seen people misframe the Empire in Star Wars as some XYZ enemy of the US and the US as the Rebels, when it has always been an allegory aimed at the US.
I regularly say that joining the military for me was like becoming a Storm Trooper. At least most people seem to understand that’s not a good thing.
Every time I read stuff like this, I always remember that slide that says "A computer can never be held accountable, therefore a computer must never make a management decision."
Great video essay by Angela Collier: AI does not exist but it will ruin everything anyway
That implies management is held accountable
It was last week.
Rare W post from .ml
Damn, I need to rewatch that show.
Season two lands next year!
Does anyone have any good article on them using AI?
The more I read about this story, the better it gets.
This ProPublica article is good reading. It discusses a company used by many insurers, including UHC, to deny claims using AI. The name of the company is EviCore. I suppose the “Evi” is supposed to be short for “evidence” but I think it is pretty clear that it’s just short for “evil.”
AI should be used as a recommendation, not an absolute answer.
I think we need laws governing businesses' use of the term. There is nothing intelligent about language models. Most of what "AI" is being used for in business is closer to "Automated Instructions" than anything intelligent.
Laws need to dictate that companies MUST have reasonable ability to get to a human representative and that they are legally responsible for their responses.
It’s fine to set up automated systems to assist people within companies, as the majority of issues people have can be solved through automated processes.
User: “I need access to this network share”
LLM: Okay submit this form: Link to network share access request form.
LLM: Can I further assist?
User submits the form, specifying the network path location, radio buttons for read or read/write permissions, and the reason for needing access.
Form sends approve/deny button to owner of that specific network share in an email.
Approver clicks approve, the user is added to the required Active Directory group, and they receive an email stating they have been added and should log out and back in so their Active Directory group memberships and group policies update.
Time taken by the user: 5 minutes.
Many companies have so many requests coming in that stuff like this often doesn't reach the approving parties and get completed for weeks.
But if you set up a non-external-facing LLM inside your company that locates forms and processes but cannot access user data or permissions, it can cut the workload of managing 60,000 users by a significant amount.
(I’m sure there are a million other uses that could be legitimate, but that’s just a quick one off the top of my head)
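The request/approve flow described above can be sketched in a few lines. This is a minimal illustration only; every name here (`Share`, `AccessRequest`, `submit`, `approve`) is hypothetical, and a real system would call email and Active Directory APIs where the comments indicate.

```python
# Minimal sketch of the access-request flow described above.
# All class and function names are hypothetical stand-ins; a real
# deployment would integrate with email and Active Directory here.
from dataclasses import dataclass, field

@dataclass
class Share:
    path: str
    owner: str                              # the approver for this share
    members: set = field(default_factory=set)

@dataclass
class AccessRequest:
    user: str
    share: Share
    permission: str                         # "read" or "read-write"
    reason: str
    status: str = "pending"

def submit(user, share, permission, reason):
    # In a real system this step would email an approve/deny link
    # to share.owner; here it just records the pending request.
    return AccessRequest(user, share, permission, reason)

def approve(req):
    # Stand-in for adding the user to the AD group and emailing them
    # to log out and back in so group policy refreshes.
    req.status = "approved"
    req.share.members.add(req.user)

share = Share(path=r"\\fileserver\finance", owner="alice")
req = submit("bob", share, "read", "quarterly reporting")
approve(req)
print(req.status, "bob" in share.members)   # approved True
```

The point of the design is that the LLM only routes the user to the right form; the actual permission change happens through the deterministic approve step, never through the model.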
annoy
🤔
Defense against the dark arts lesson one:
Unpopular opinion: It’s OK to use AI to fight fraud as long as your data is good, your precision threshold is very high, and appeals are easy. It seems like it is almost never used in this way when people try to save money, sadly.
Current AI is incapable of providing that level of good data and high precision; it is uncertain whether the types of AI being developed now are even capable of ever achieving that without fundamentally changing how they work.
And this AI is said to have a 90% error rate, meaning it denies valid claims 90% of the time…
Define AI. Then you’ll see that it has been used to fight fraud for decades.
I work in management at an insurance firm and that's exactly what we do (use AI for fraud prevention). We have no interest in denying rightful coverage because in the long run it can cost you more than just paying them outright (lawyer costs, interventions, bad PR, etc.). If you don't work in the industry, you have NO idea how many people try to cheat. It's ridiculous.