An AI Chatbot will answer your client’s questions about Medicaid eligibility criteria. And it will do so sounding confident, providing exact income and asset limits, and perhaps even functional (medical) criteria. However, those answers will only be right if your clients know exactly the right questions to ask, and chances are good they don’t.
Eldercare Resource Planning (ECRP) has tested various AI Chatbots, such as ChatGPT and Google’s Bard, and the results have been poor. As industry professionals, ECRP knows how clients ask questions about Medicaid eligibility criteria when they are trying to make their long-term care plans. The clients don’t know much about Medicaid, and they don’t know what they don’t know, so their questions are vague. When ECRP fed those kinds of vague questions into AI Chatbots, it got vague, and often incorrect, answers in return.
“AI may give you the right answers only if you ask the right questions,” ECRP AI researcher Daniel Gheorge said.
For example, most clients who are trying to figure out eligibility criteria for themselves or a family member start the process by asking a broad question like, “What are the Medicaid eligibility requirements?” Many of them don’t know there are different types of Medicaid for children, families, pregnant women, and the elderly. And if they don’t know that, they can’t ask AI the right questions.
Maybe a client knows enough to ask about Medicaid long-term care for the elderly, but do they know they can get that care through three types of Medicaid programs? Do they realize they can receive long-term care at home, even if they need a Nursing Facility Level of Care? Would the client know enough to specify which state they live in, since Medicaid rules vary by state? Or would they know enough to specify whether they are single or married? And if they’re married, is their spouse also applying, or already enrolled? All of these factors have a major impact on Medicaid eligibility criteria, but an AI Chatbot would only include them in its calculations if the user knew enough to input them in the first place.
A Guided Experience
In stark contrast to AI Chatbots, this Medicaid Eligibility Requirements Finder (MERF) tool asks the user a few simple questions and then delivers an accurate and detailed answer. A person using MERF doesn’t need to know that Medicaid rules vary by state, because the first question MERF asks is, “In what state will the individual be applying for Medicaid?” Next, MERF gives the user brief descriptions of the three Medicaid Long Term Care programs and asks which one they are applying for – Nursing Home, Home and Community Based Service Waivers, or Aged Blind and Disabled Medicaid. Finally, MERF asks whether the user is married or single, and if married, whether their spouse is also applying.
After the user answers those three (or four if married) simple questions, MERF provides them with a comprehensive look at their eligibility criteria. It gives the applicant’s exact income and asset limits, and some details on their spouse’s income and asset limits, if applicable. There’s an explanation of how the applicant’s house might, or might not, be counted against that asset limit. The functional (aka medical) level of care requirements are explained and broken down into categories – physical functional ability, health issues/medical needs, cognitive impairment, and behavioral problems. Plus, there’s some guidance on what an applicant might do to become eligible if they don’t meet these requirements.
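The guided flow described above can be sketched as a simple lookup keyed on the user’s answers. This is not MERF’s actual implementation, only an illustration of the idea; the dollar figures below are placeholders, not real Medicaid limits.

```python
# A minimal sketch of a guided eligibility flow like the one described above.
# All dollar amounts are illustrative placeholders, NOT real Medicaid figures.

PROGRAMS = ("Nursing Home", "HCBS Waivers", "Aged Blind and Disabled")

# (state, program, marital status) -> placeholder income/asset limits.
# A real tool would load per-state rules that are updated throughout the year.
LIMITS = {
    ("Virginia", "Nursing Home", "single"): {"income": 2500, "assets": 2000},
    ("Virginia", "Nursing Home", "married, one applicant"): {"income": 2500, "assets": 2000},
}

def eligibility_summary(state: str, program: str, marital_status: str) -> str:
    """Answer the three (or four) guided questions and return the criteria."""
    if program not in PROGRAMS:
        raise ValueError(f"Unknown program: {program}")
    limits = LIMITS.get((state, program, marital_status))
    if limits is None:
        return f"No rules loaded for {state} / {program} / {marital_status}."
    return (f"{program} Medicaid in {state} ({marital_status}): "
            f"income limit ${limits['income']}/mo, asset limit ${limits['assets']}")

print(eligibility_summary("Virginia", "Nursing Home", "single"))
```

The point of the sketch is that the tool, not the user, supplies the distinctions that matter (state, program, marital status), so a vague user still gets a specific answer.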
That last bit of advice MERF provides is not within the scope of current AI Chatbots. Even when those Chatbots come up with the right answer, they won’t be able to come up with a solution if that answer doesn’t solve the client’s problem. Yes, AI might be able to tell a user the asset limit for Nursing Home Medicaid in Virginia for a single applicant in 2023 is $2,000. But if the client is over that limit, the AI Chatbot won’t explain there are ways to spend down the extra assets and become eligible. And a Chatbot won’t ask to look over the client’s assets to see if any of them might actually be exempt.
Not Enough Accurate Data
To ask the right nuanced questions, AI needs to process and learn from mountains of data. For niche subjects with a high degree of variance, like Medicaid eligibility criteria, there just isn’t enough data. Furthermore, the data that does exist is structured inconsistently, comes in many formats, and is constantly changing. The information might be published in Word documents, spreadsheets, or PDFs. It could be in HTML code or plain text. Some states still publish information by scanning documents. Plus, states update their Medicaid rules and limits every year, but not all at the same time. Some states update on Jan. 1, others in April, and still others in July. All told, updates to state Medicaid standards occur in six different months of the year.
Understanding the Asker
For now, understanding the nuances of a complex topic like Medicaid eligibility criteria is beyond the range of AI. And understanding the nuances of a face-to-face human interaction is well beyond AI’s range.
“AI does not know the level of sophistication of the user,” Gheorge said, “and therefore cannot nuance its responses to the user.”
Humans gauge each other’s level of sophistication instinctively. When we’re speaking to another person, especially if we’re explaining something as complex as Medicaid eligibility criteria, we pay careful attention to the listener. Are they following us, or do they seem confused? If they look puzzled, we naturally slow down, choose different words, and elaborate on especially difficult sub-topics. If a user feels confused after interacting with an AI Chatbot, all they get is a blinking cursor.
So, based on its research and what it knows about Medicaid applicants, ECRP urges you to caution your clients against using AI Chatbots to determine their Medicaid eligibility criteria. Perhaps AI will prove helpful in this area in the future, but that time is still years away.