Here's a piece by four researchers touting Social Security's progress in Artificial Intelligence. It's somewhat odd. Apparently, it hasn't been published, unless you consider releasing it online as publication. It has no date on it. I don't see any source cited from after 2021, so this may be a few years old. I may not be the one who should say this, but I'm not sure that what they're calling "Artificial Intelligence" would generally be called "Artificial Intelligence" today.
Hmmm. Might be interesting if they ran all the cases from the last 25 years or so through the AI, assuming that were possible, and compared the approval decisions the AI would have made against the actual approvals and denials. Would the overall rates be the same, while the case-by-case agreement was poor? Say 50% of cases are ultimately approved by the current process and 50% by the AI, but the AI approves a different set of cases, agreeing with the current process only half the time: 25% approved by both, 25% denied by both, 25% denied by the current process but approved by the AI, and 25% approved by the current process but denied by the AI (a quick sketch of that arithmetic follows below). If that were to happen, what CONCLUSIONS could and would be made?
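To make that hypothetical concrete, here is a rough Python sketch of the arithmetic. The 25/25/25/25 split and the two-way approve/deny labels are invented purely for illustration; they don't come from the chapter or from any actual SSA data.

```python
# A minimal sketch of the hypothetical above; the 25/25/25/25 split is an
# invented illustration, not real SSA data.

# Hypothetical decisions for 100 cases: (current-process decision, AI decision)
cases = (
    [("approve", "approve")] * 25 +  # approved by both
    [("deny", "deny")] * 25 +        # denied by both
    [("deny", "approve")] * 25 +     # denied by current, approved by AI
    [("approve", "deny")] * 25       # approved by current, denied by AI
)

total = len(cases)
current_rate = sum(cur == "approve" for cur, _ in cases) / total
ai_rate = sum(ai == "approve" for _, ai in cases) / total
agreement = sum(cur == ai for cur, ai in cases) / total

print(f"Current-process approval rate: {current_rate:.0%}")  # 50%
print(f"AI approval rate:              {ai_rate:.0%}")       # 50%
print(f"Case-level agreement:          {agreement:.0%}")     # 50%
```

The point of the toy numbers: identical aggregate approval rates can coexist with only 50% case-level agreement, which for a 50/50 split is no better than flipping a coin.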
Looks like it's a chapter within the book "The Oxford Handbook of AI Governance". It appears that the book is still in development, but chapters are published online as they are completed. Here is the link for the book - https://academic.oup.com/edited-volume/41989
Also, the chapter was published to SSRN on November 28th, 2021 and was written on August 18th, 2021 (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3935950). So it predates the initial launch of ChatGPT by over a year, along with the explosion of interest in Generative AI that followed...which, unfortunately, is now what the vast majority of the public associates with "AI" as a whole.
But Artificial Intelligence is a much broader field than just the natural language chatbots and image generation tools that have captured everyone's attention, and the case classification and text extraction applications detailed in the chapter still fit firmly within what would be considered "modern" AI applications.
Correct, AI is a much, much older concept, actually dating back to the earliest days of programming languages, especially Lisp. I remember programming some AI routines using Prolog in my Computer Science curriculum in the late 1980s. Several industries, such as geology, have been using AI software since the early 1980s.
What is 'newer' is combining very fast computing resources with advanced natural language parsing and very large datasets, giving us Large Language Models (LLMs). To laymen, this may seem like artificial intelligence, but under the strict Computer Science definition, it's not. Writing software programs that can modify themselves and learn from themselves is, and we have been doing that for the past half century.