In July the Trump Administration unveiled an AI Action Plan that seeks to accelerate AI innovation nationwide. The document called the healthcare sector especially slow to adopt AI “due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.”
Healthcare Innovation recently spoke with Keith Roberts, a litigation attorney on regulatory matters with New Jersey-based law firm Brach Eichler, about the plan.
Healthcare Innovation: The Trump administration’s AI Action Plan says it’s looking to eliminate federal rules or regulations that hinder AI innovation and adoption, but it didn't say what those were. Are there some obvious federal rules or regulations that might hinder AI adoption?
Roberts: The short answer is no, but I can give some insight about the larger picture. Not only is the technology evolving, but the ethical, regulatory and legal concerns behind the technology are evolving rapidly, too. I have lectured to multiple physician groups and a large ambulatory surgical group recently on this topic. In the delivery of healthcare, there are strong concerns about the ethical and practical considerations.
Only two state legislatures have aggressively regulated the use and development of AI — Colorado and California. That’s going to spawn litigation in terms of how it's implemented. Here in New Jersey, we have three primary areas where AI is being addressed from a legal perspective. There is a bill that is going to address the denial of care from commercial payers, and whether or not AI can be used to assist in reimbursement for care. Another bill in the AI space involves the use of chatbots in counseling and psychotherapy spaces. Also, the state Attorney General has given guidance about using it in the workplace to make workplace determinations because of its inherent biases.
That brings us full circle to where the Trump administration has laid out this very aggressive approach to essentially removing whatever hindrances it perceives to the development and implementation of AI in all industries, healthcare being just one of them.
When you look at the World Health Organization's position, and other positions that have been taken by responsible people in research, you have to be careful about implementing broadly and accelerating the use of AI in healthcare because you're dealing with a large amount of data sets that have been developed with inherent biases, which is problematic. The technology itself hasn't been fully developed enough for scientists practicing in the medical field or giving advice to practitioners to be comfortable that their ethical concerns have been met with decisions that they're going to make about the care of a human being based upon a data set that is in a black box, right?
HCI: There are probably also questions about transparency and how much you're telling the patients that AI is guiding a decision that is being made about them.
Roberts: That's an excellent point. There'll be regulations and/or laws around transparency. But first we're trying to get off the starting blocks with vetting the use of the technology itself. There is some tension with the position of the administration, which holds that certain fields, such as healthcare, have been slow to adopt the technology because of a lack of understanding. That’s not true. It's not a lack of understanding; it's quite the opposite. I think there is an understanding of the complexity.
HCI: The AI Action Plan states that federal agencies that have AI-related discretionary funding programs should consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award. Where's that going to lead?
Roberts: I think in the coming months, once the FCC and other regulatory bodies and governmental entities have had the opportunity to engage in their fact-finding and due diligence around these issues, we'll see more clarification of the administration's position and what the tensions really are. But there are some silver linings here. They're proposing centers of excellence and sandboxes, right? That's good because regardless of what your position is on how fast and far health systems are moving into the space, if you participate in a center of excellence, you're essentially privatizing the development of the technology in a safe zone. So you’ll get to develop, use, and vet that technology with governmental support.
HCI: This year I have heard several people in the health IT space make the case for having a new federal law supersede state-based healthcare privacy regulations because health systems and health IT vendors find it so difficult to cope with the patchwork of state laws.
I have never heard that gain much traction before, but I was reminded of it with this AI Action Plan that seems to threaten to punish states that are putting in place more AI protections for patients.
Roberts: I agree with you, and I think it's a good parallel that you've recognized there. I don't think it's going to work. I don't think it'll get traction to the extent that it'll get passed by both houses and signed by the president. Some states are going to have privacy laws that are going to scrutinize and restrict uses of technology more than other states. I don't think that there'll be one federal standard that will be applied universally.
Keep in mind that this administration tends to start with a position that's five steps ahead of where they really want to be, right? I think it's a recognized tactic of this administration. They really want to get to level six, so they start at level 10 and then back up. There's just as much confusion around this policy as there is clarity. No one knows how these centers of excellence are going to be rolled out. No one knows what the FCC is going to say. We will know a lot more about what this is going to look like at the end of the year.
HCI: Have there already been some cases where health systems have been sued for the way they deployed AI, or has that not happened yet?
Roberts: I haven't seen any national level cases being litigated that I consider to be reliable test cases on the subject, but it's going to come. That’s a difficult case to develop because of the nature of AI itself. You would have to identify your initial defendant from a liability standpoint, right? Is it the single provider? Is it the employer who directed the provider to use it? Is it the system that purchased the database?
I think leaders in AI use in healthcare right now have been in cardiology, oncology, and radiology — these are the areas that are really booming. Radiology is being revolutionized by AI, but if a large data set has been created or populated with a specific type of demographic or in a specific geographic area, that could lead to inherent biases in the development of the software. That is an issue that radiology is facing right now.
HCI: Do health system execs or physician practices come to your firm for advice on how they should do AI governance to protect themselves from a liability standpoint, as they're putting these things in place?
Roberts: I counsel health systems and large medical groups. I'm in the process of developing an AI transparency policy for ambulatory surgery centers.
Now health systems are looking more toward their compliance vendors, in conjunction with legal, but this is an area that is in its infancy. Quite honestly, I think compliance officers are going to be looking to legal, and legal is going to be pulled in as these regulations come to be. Here, we haven't had major action by the New Jersey Board of Medical Examiners or by the legislature yet. We've had discussions and we've had things introduced. We’ve had guidance from the Attorney General.
HCI: So are all those bodies you just mentioned likely to come out with more specific requirements or regulations in the next year or so?
Roberts: I hate to say it, but it's probably going to be the product of some mistake or error or something that comes to light that is not positive.