Social Security is in the pilot phase of developing DeDoop, a program that is supposed to remove duplicative medical records from the electronic files of Social Security disability claimants.
Why would this be a joke? This is necessary because your colleagues constantly submit hundreds of pages of duplicates.
I've noticed this happens when your referring doctor gets records from specialists, etc. This just gives SSA a chance to ignore your records twice!
Who put the doop in the doop DeDoop?
Necessary for those times when you doop your drawers. It happens.
@6:19
I think he meant the name is a joke. Or he meant it is a joke that, rather than devoting greater funding to areas that would address the backlog (i.e., personnel), the Administration is investing in a new technology that, at best, will remove an annoyance.
Sorting through hundreds of pages of duplicate records is not an annoyance; it wastes thousands of man-hours every day across the agency. Private-sector law firms and e-discovery companies have duplicate-detection functionality because sorting through hundreds or thousands of pages to figure out which ones are the same is a task better done by a computer in 20 seconds than by paying a human to waste an hour or several hours.
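For what it's worth, the exact-duplicate half of this problem really is only a few lines of code. Here is a minimal sketch in Python, assuming the pages have already been extracted as text (by OCR or otherwise); the function and variable names are just illustrations, not anything SSA, DeDoop, or any e-discovery vendor actually uses:

```python
import hashlib

def find_exact_duplicates(pages):
    """Group pages by a hash of their normalized text; return (dupe, original) index pairs."""
    seen = {}          # hash -> index of the first page with that content
    duplicates = []    # (duplicate_page_index, original_page_index)
    for i, text in enumerate(pages):
        normalized = " ".join(text.split()).lower()   # collapse whitespace, ignore case
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if digest in seen:
            duplicates.append((i, seen[digest]))
        else:
            seen[digest] = i
    return duplicates

# Toy example: page 2 repeats page 0 except for extra whitespace.
pages = [
    "Discharge summary 1/5/2015",
    "Office visit note 2/3/2015",
    "Discharge  summary 1/5/2015",
]
print(find_exact_duplicates(pages))   # [(2, 0)]
```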
They do? Let's just buy it from them, then.
@11:45
I second that emotion. As a writer for 27 years, I can tell you that the most time-consuming part of decision-writing is reading and analyzing the medical evidence. If I have 1,000 pages and 500 are dupes (not an uncommon scenario), that means I save half of that time...which greatly reduces the overall time spent writing that case...which means I can get more cases out cumulatively...which ultimately results in backlog reduction. See how that works?
@11:45
Annoyance on one hand, a waste of thousands of man-hours on the other. On the other...other hand, if you believe the Administration intends to remove the language requiring representatives to be responsible for removing duplicate evidence prior to submission, since the software would address the issue anyway, that sounds great! Then again, if evidence is accidentally removed and is not actually a duplicate, that would be a massive problem.
Sometimes things look like duplicates but they aren't. Examples:
* draft and final radiologist's reports (identical except one says draft and one says final)
* true duplicates but in paginated sets (and the ALJ is going to be upset if you withhold pages 65-92 and 107 out of the 150-page document)
* true duplicates that come from different sources (the hospital sends you a discharge statement and it's also in the primary care provider's records; it's important to be able to show that the PCP was aware of the hospitalization, especially now that treating sources are not given controlling weight and familiarity with the complete record is one of the ways to determine how much weight to give).
Sometimes things are duplicative but not exact duplicates. VA records are very repetitive, but because of how the pages print out, there are no pages that are exact duplicates of each other.
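That distinction (duplicative but not identical) is exactly where the hashing sketch above falls down. A rough way to catch near-duplicates is a similarity score; the snippet below uses Python's standard difflib, with a made-up 0.9 cutoff purely for illustration. Note that a draft and a final radiology report would also score very high, which is precisely the "looks like a duplicate but isn't" trap described in the list above:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Return a 0-1 ratio of how similar two pages of text are."""
    return SequenceMatcher(None, a, b).ratio()

# Two VA-style pages: same note, but the pagination header differs,
# so an exact-match hash would never pair them.
page_a = "Page 12 of 340   VA PROGRESS NOTE 03/01/2015  Chief complaint: chronic knee pain, stable on current meds."
page_b = "Page 87 of 512   VA PROGRESS NOTE 03/01/2015  Chief complaint: chronic knee pain, stable on current meds."

score = similarity(page_a, page_b)
print(f"similarity = {score:.2f}")                       # close to 1.0 despite different page numbers
print("flag as likely duplicate" if score > 0.9 else "keep both pages")
```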
Finally, where will DeDoop put the deleted pages? Will claimants and reps have the ability to look at them somewhere on ERE to make sure they aren't inaccurately coded as duplicates, or will they just be deleted/put in the C section? Will they be in the record sent to federal court if the case gets there?
@3:53PM All great points, and sadly none of which the Agency is probably even thinking about. They don't care about getting things right, just about getting things done efficiently (which they are far from doing at present).
I guess I will be the one to get this out of the way. "Whoop deDoop."
Ok, carry on...
My understanding is that the duplicate records are supposed to be removed from the file by support staff. I guess it is ok for folks to not do the jobs they are paid to do. I am sure taxpayers appreciate it.
If hundreds or thousands of pages are being submitted, you can be sure there is no objective evidence of a listed disability to be found.
@8:56, maybe your "understanding" is a little facile and without much actual familiarity?
@8:56 There are fewer people working to remove duplicates. Thanks to that horrible, awful Obamacare, more claimants have actually seen medical providers, so the size of the file is growing.
If Sally Claimant says she became disabled on January 1, 2015, many responsible representatives send a records request to Martha Primary Care seeking records from January 1, 2014 (a year prior) to the present (let's say June 2015, when Sally first walks into the office), and that produces 58 pages.
After denial at the initial level, the rep will resend the request, and now it produces 67 pages. After recon and the filing of the request for hearing, the rep repeats the request and it produces 71 pages.
After about 6-8 months, the rep repeats it again and gets 79 pages, because the rep wants to see if an on-the-record decision might be appropriate. Finally, when the notice of hearing arrives, the request is repeated and produces 90 pages.
There are now 275 pages of records that exist in the file in two or more copies, but at least we are more efficient and have fewer people tasked with working that file up.
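Just to spell out the arithmetic behind that hypothetical: each re-request returns the entire history again, so everything except the latest copy is a duplicate. A quick back-of-the-envelope check, using the page counts from the Sally Claimant example above:

```python
requests = [58, 67, 71, 79, 90]           # pages returned by the five successive requests
total_submitted = sum(requests)           # 365 pages end up in the file
unique_at_most = max(requests)            # at most 90 of those pages are distinct
print(total_submitted - unique_at_most)   # 275 pages present in two or more copies
```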
Sometimes the records are submitted while the case is at the State Agency. There is no practical way to know exactly what is in the file at the State Agency, so the records are sent again.
Having access to the file at ODAR helps. And we will not deliberately resubmit what is already there.
But there is still a problem if an ALJ sees a fax that shows you are submitting pages 18-42 out of the 97 you received, and the Judge wants to know what happened to the others.
As to having to look at every page to write a decision, you only need to look closely enough to see that they are the same report. It is a pain, but it should not take that long.
@12:26
I do not understand why there is not an equivalent to ERE for the initial and reconsideration levels, or why ERE cannot be extended to these levels directly. I assume the difficulty is due to the divide between the state agencies and the Administration.
The decision writers droop
ReplyDeleteFrom reading too many dups
But never fear
A hero is here
A program named DeDoop
They made reps submit them all
Which created this stormy squall
Now what is the cost
For efficiency lost
Will the taxpayers take the fall?
Love you, 6:02.
@10:52
It still doesn't explain why support staff is not removing the duplicates. Perhaps human laziness (sad but true) is the real answer.