Data Privilege and AI in Lawyering: Ethical Concerns

The American Bar Association provides confidentiality rules to ensure that lawyers maintain professionalism and trust with their clients. In particular, Rule 1.6 of the Model Rules of Professional Conduct expressly requires that “[a] lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent.” Additionally, Rule 1.1 requires that a lawyer provide competent representation to clients, which “requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” As artificial intelligence (AI) tools like ChatGPT and Microsoft Copilot continue to rise in popularity, the ethical concerns surrounding these platforms grow as well.

ChatGPT emerged in 2022 as an AI program that “uses machine learning algorithms to process and analyze large amounts of data to generate responses to user inquiries.” It can respond in seconds with information that could take hours for a human to discover, and it does so while answering questions in a conversational manner that mimics human communication.

Beyond research, practicing lawyers can use ChatGPT or similar AI programs to draft memos and contracts and to assist with daily tasks for clients. With the increased use of this new technology come various concerns about data security and client privacy, both of which are ethical considerations. The immediate concern within the legal community is the ethics of using AI programs to answer client-specific legal questions and create documents, for fear that these platforms’ retention of user inputs will expose clients’ personal data and information. Additionally, lawyers risk relying on biased information and nonexistent case law fabricated by AI.

Since these programs store user data, there is a risk that confidential client information may be compromised. The question that follows is: how should the American Bar Association (ABA) regulate lawyers’ use of AI programs to ensure the protection of client information?

Experts and scholars disagree about potential solutions to this dangerous confidentiality risk. Some argue that the ABA should prohibit self-learning AI programs entirely. Others advocate for increased use of informed consent forms from clients. Still others argue for the use of only “in-house” AI programs.

First, the ABA could prohibit self-learning AI programs entirely. Self-learning AI programs store user information in order to produce future results; in a legal context, these programs retain and encode client information into their systems. By contrast, non-self-learning AI programs draw on training data already within their systems, meaning that the program does not retain data from previous user interactions, and thus client information is never stored. These programs may help clients and attorneys alike feel more at ease, knowing that their data is protected.

Second, a less aggressive answer would be to require that lawyers obtain a client’s informed consent before using AI programs, including before disclosing any personal or confidential information. Informed consent is a familiar concept in legal ethics: it allows lawyers to take a particular course of action after the client has been informed of its material risks and reasonable alternatives. In the context of AI, informed consent can help clients understand the time and money that these platforms can save while also weighing the potential risks to confidentiality and data protection. Although this option provides transparency for clients, it does nothing to mitigate the risk of data leakage; it simply informs the client that such leakage is possible.

Lastly, the ABA could promote ethical lawyering by requiring law firms to use generative AI systems that are “in-house.” “In-house” generative AI systems are built specifically for a particular law firm, giving the firm its own dedicated AI system. Using “in-house” generative AI systems would ensure that privileged client data remains within the firm’s control. This not only provides enhanced data protection but can also lead to more customized search results and tailored outputs. We have already seen interest in this course of action: Dentons announced a proprietary generative AI platform called “fleetAI” in 2023.

While this is a plausible solution to confidentiality concerns, such systems would be vastly expensive and time-consuming to create. This poses a problem for smaller firms in particular, which may not have the capacity to dedicate time and resources to building their own AI programs; if the ABA were to make in-house systems a requirement, that disparity would itself raise an ethical issue.

In conclusion, the integration of AI systems into daily life raises ethical concerns that will only grow with the increased use of generative AI in everyday lawyering. These concerns will likely force the ABA to implement some form of prohibition or requirement governing these systems in the near future, a development that new and seasoned attorneys alike should pay attention to.

Isadora Dimovski and Jessica Korn are Associate Editors on the Michigan Technology Law Review.
