Apr 29, 2025

A Preview Of What AI Could Do For Social Security

     From Rest of World:

When Josélia de Brito, a former sugarcane worker from a remote town in northeast Brazil, filed for her retirement benefits through the mandated government app, she expected her claim would be processed quickly. Instead, her request was instantly turned down because the system identified her as a man. ...

It was especially frustrating for de Brito, who had been requesting sick pay for years via the National Social Security Institute’s artificial intelligence-powered app, Meu INSS. ... [E]ven minor errors in her claims filed through the app had led to numerous rejections, with few options for recourse. ...

Brazil’s social security institute, known as INSS, added AI to its app in 2018 in an effort to cut red tape and speed up claims. The office, known for its long lines and wait times, had around 2 million pending requests for everything from doctor’s appointments to sick pay to pensions to retirement benefits at the time. While the AI-powered tool has since helped process thousands of basic claims, it has also rejected requests from hundreds of people like de Brito — who live in remote areas and have little digital literacy — for minor errors. ...


19 comments:

Anonymous said...

Sure, AI will make mistakes. So will humans. But would you rather have the potential for an instantly approved claim or wait for four hours on the SSA hotline?

Anonymous said...

Asimov warned us

Anonymous said...

AI is no match for natural stupidity

Anonymous said...

Humans can be reasoned with, and interrogated about their thought processes and how they made their decisions. AI software can’t do any of that, and its creators don’t even really understand how it operates beyond a superficial level. If you’re okay with that, then you aren’t capable of participating in a functional democracy, and should promptly stop voting and f*** off to North Korea.

Anonymous said...

It was bad enough working at the TSCs when people would yell about how much they HATED the endless phone prompt menu.

Anonymous said...

10:37 AM needs medical care. There are benefits to AI, which will make a lot of this work easier. AI is already helpful in analyzing medical imaging. There's no reason that it won't help in reviewing or summarizing medical evidence.

Anonymous said...

Plenty of states have casinos if you want to gamble on something.

Anonymous said...

Claimants’ reps should be looking into ways to use AI to help their businesses, if they are smart.

Anonymous said...

If anything, AI could make reps more relevant, since the appeals and protests coming from AI denials, as mentioned above, will have had no independent thinking or interpretation applied. There will be more appeals, and more of them will be won once a claim finally reaches a human and logic gets applied to it. AI will only see black and white.

Anonymous said...

@1:18: You need remedial reading instruction. The post you’re referring to, like Charles’s, concerns automated decision making, not mere crude summarization of voluminous text.

Anonymous said...

@3:31. Well, then reps should utilize AI on their appeals of AI denials. AI is going to be a tool to help people do their jobs before we have to worry about it making decisions. Put aside the comic book apocalyptic version of AI and learn what it actually can do and how to use it intelligently. But if you would rather pore over voluminous records by hand instead of having an instantaneous synopsis to help you get started, be my guest. It will just take you longer.

Anonymous said...

AI will soon provide summarization and decision support at all adjudicative levels. Once there, the new COSS will further reduce operational staff, and automated decision making will take over at every level. The new COSS does not care about correctness. No humans in the loop. Reassignments are just a distraction leading to more reductions, really just a mirage as folks look the other way while this all happens.

Anonymous said...

The article stated that the Brazilian AI feature began in 2018. If the tool was even considered “AI” at that time, it certainly did not resemble what is available currently. Also, it related to a retirement application, which would be much less likely to involve judgment calls in the way a disability application does. You certainly wouldn’t use AI to identify gender - you’d just look at what the applicant reported. This sounds far more likely to be a regular old software error in some kind of triage tool flagging applications that fail on their face - like if someone under 62 applied for retirement in the US. If there is an appeal process that doesn’t penalize applicants for inadvertent errors, it’s not a big deal. The same type of error could certainly happen with humans reviewing applications from scratch. Also, the article started morphing into more of a critique of not accommodating those without digital literacy or access, which is a totally separate issue.

In fact, AI as it exists today is way more likely to be able to identify errors amidst millions of applications. Would you prefer to have individual bank employees look for signs of mistaken charges on your credit card, or rely on AI to flag that before you are cleaned out?

Anonymous said...

Reps can then coach providers to use the keywords the AI keys on and get better results. Right now, given the wide variety of record formats from each healthcare provider, it is difficult to get an AI to even sort out duplicate records. You can go through and tag records as you read and then sort. I could come up with a simple Magic 8 Ball device that spews out random jobs like router and sorter and replaces VEs in an instant. I could also replace MEs. And I can reduce the number of ALJs super fast and voluntarily: just change the name of the position from Administrative Law Judge to Adjudicator.

Anonymous said...

Is it true that some DDSs are already using an AI called Imagine to review medical records, and that it’s behind those horrible decisions we are seeing?

Anonymous said...

Yeah, IMAGEN. SSA's been vetting it for OHO's use for a while but it hasn't yet been released to anyone other than some beta users.

To be fair, I've seen no change in the quality (or lack thereof) of the initial and recon determinations since they got this tool. From what I've heard from beta users, it's helpful for guiding you through the medical evidence but doesn't replace the need to actually look at the evidence with your own eyeballs.

Anonymous said...

Thank you for the information. Our decisions in the midwest states seem to be lower quality than in previous years, but the backlog is where I would put the blame. Under pressure to get things out fast, they are denying everything on the grids at age 60 and up.

Anonymous said...

AI may be useful for discrete tasks like crunching through records to skim off easy listing cases. I can already word search a PDF, cut, paste, organize, and use templates without AI. As for the more advanced AI functions I have heard touted, I remain unconvinced that they can save me time without unacceptable quality loss. AI is not thorough and accurate enough to be reliable. It hallucinates and makes mistakes fairly often. Due diligence and professional responsibility still include reading the whole claim file, and if I'm still reading the whole claim file, then AI isn't saving me much time. Using AI also requires additional vigilance to ensure you don't break the law by leaking PII, unless it is a closed local AI system.
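
To put the "without AI" part concretely, the kind of local keyword search I mean takes only a few lines of Python (a rough sketch; the file name and search terms are made-up placeholders, and everything runs on your own machine, so no PII goes anywhere):

# Rough sketch of a local keyword search over a claim file PDF.
# "claim_file_exhibits.pdf" and the search terms are placeholders;
# everything runs locally, so no PII is sent to any outside service.
from pypdf import PdfReader  # pip install pypdf

TERMS = ["mri", "nerve conduction", "physical therapy"]
reader = PdfReader("claim_file_exhibits.pdf")

for page_number, page in enumerate(reader.pages, start=1):
    text = (page.extract_text() or "").lower()
    for term in TERMS:
        if term in text:
            print(f"'{term}' found on page {page_number}")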

Anonymous said...

I think anyone who has worked for the agency can attest that much of the time spent on claims goes to explaining what the application questions mean, especially with disability and SSI claims. There is a reason the agency’s correspondence is written at a 5th grade reading comprehension level. AI cannot do this for applicants. Just imagine the frustration folks typically have when they can’t talk to a “live person” doing business with a private company, except now it’s benefits that populations facing barriers rely on to live. I cannot imagine the erroneous denials when there is no human to review the claim before processing. Even using AI just to prepare papers, memos, and summaries requires a human to review them for accuracy.