Showing posts with label Artificial Intelligence Discrimination.

Thursday, July 10, 2025

The AI Audit That Wasn’t: How Grok Became a Legal Liability

Grok, the AI chatbot built by Elon Musk's company xAI, was caught surfacing antisemitic and racist responses on X (formerly Twitter).

When AI systems are deployed without content filters, human review, or ethical auditing, they can do more than make mistakes in their output. They can create liability under anti-discrimination laws.
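For illustration only, here is a minimal sketch of what those guardrails can look like in practice: a keyword screen plus a human-review flag applied to a model's output before it is published. The term list, function name, and routing logic are assumptions for this post, not any platform's actual implementation.

  # Hypothetical output guardrail: a keyword screen plus a human-review flag,
  # applied to an AI-generated response before it is published.
  BLOCKED_TERMS = {"placeholder_slur_1", "placeholder_slur_2"}  # stand-in terms

  def route_response(text: str) -> dict:
      """Decide whether a response can auto-publish or needs human review."""
      hits = [term for term in BLOCKED_TERMS if term in text.lower()]
      if hits:
          # Never auto-publish flagged content; escalate to a human reviewer
          # and keep a record for the ethical-auditing trail.
          return {"publish": False, "human_review": True, "hits": hits}
      return {"publish": True, "human_review": False, "hits": []}

  print(route_response("An ordinary, benign reply."))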

Why AI Needs Oversight

Generative AI tools trained on the open web can replicate bias and hate unless they are carefully monitored. That is why Andrew Lieb, Managing Attorney at Lieb at Law, published the 10-Step Bias Elimination Audit in the New York Law Journal, a compliance roadmap for companies deploying AI tools. Key steps include:

  • Audit datasets for bias and disparate impact (a sketch of this step follows the list)
  • Implement real-time monitoring of outputs
  • Involve multidisciplinary teams in reviews
  • Document all mitigation efforts for accountability
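As a purely illustrative sketch of the first step, the snippet below computes selection rates by group and the resulting impact ratios, which are commonly compared against the EEOC's four-fifths (0.8) rule of thumb. The data and column names are hypothetical, and this is not the methodology prescribed by the Bias Elimination Audit itself.

  # Illustrative disparate-impact check on a hypothetical dataset:
  # selection rate per group, then each group's rate divided by the
  # highest group's rate (the "impact ratio").
  import pandas as pd

  df = pd.DataFrame({
      "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
      "selected": [1,   1,   1,   0,   1,   0,   0,   0],
  })

  rates = df.groupby("group")["selected"].mean()   # selection rate per group
  impact_ratios = rates / rates.max()              # ratio vs. best-treated group

  print(rates)
  print(impact_ratios)
  print("Potential disparate impact:", bool((impact_ratios < 0.8).any()))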

AI and Legal Risk

AI discrimination is not theoretical. It is actionable under federal and state laws, including Title VII, the ADA, the Fair Housing Act, and the New York Executive Law, as well as local laws such as the New York City Human Rights Law. If an algorithm makes a discriminatory decision, the company using it can be sued for discrimination and can owe back pay, front pay, emotional distress damages, punitive damages, and attorneys' and expert fees, while also being ordered to change its practices, provide anti-discrimination training, and more.

At Lieb at Law, we provide both defense and prevention. Our team litigates AI-related claims and performs compliance audits to help businesses avoid them.

Explore Our AI Compliance and Litigation Services

Lieb at Law helps companies navigate the legal risks of artificial intelligence and machine learning. We defend discrimination claims, perform algorithmic audits, and deliver CLE training on AI legal exposure.

Learn More


Attorney Advertising: This blog post is for informational purposes only and does not constitute legal advice. Prior results do not guarantee a similar outcome.

Monday, December 09, 2024

Avoiding Discrimination in AI: CLE from Lieb at Law's Claudia Cannam

As artificial intelligence continues to transform industries, it also presents unique legal challenges, particularly in avoiding discriminatory practices embedded in AI systems. To help attorneys navigate these complexities, Lieb at Law Associate Claudia Cannam recently taught a 1-credit CLE course through Quimbee, titled “Avoiding Discrimination in AI.”


Avoiding Discrimination in AI: To navigate the legal challenges that come with new technology, you must understand the hidden biases in artificial intelligence systems and their legal impact. This course dives into how AI discrimination occurs and its real-world consequences, and reviews practical advice and strategies for avoiding discrimination when using AI.

Register Now: Attorneys can register for Claudia Cannam's Avoiding Discrimination in AI CLE course here.



Wednesday, November 24, 2021

Artificial Intelligence Decides if You're Hired! Is It Discriminatory?

Wonder why you were denied the last job or promotion you applied for? 


Wonder no more, because there is a good chance that it wasn't a human's decision. In fact, Artificial Intelligence ("AI") has become the judge of who is hired and who is promoted at many employers and employment agencies. However, AI isn't perfect, and it may infringe on your anti-discrimination rights if it is not properly programmed and regularly audited.


That is why AI used this way, known as an Automated Employment Decision Tool ("AEDT"), has been the target of much scrutiny. Experts point out that AEDTs are prone to bias in the hiring and promotion process, including racial, sexual, and ethnic discrimination, among many other protected categories. The problem has become so worrisome that New York City is amending the New York City Administrative Code to curb the use of AI in hiring.


The amendment was approved by the New York City Council on November 10, 2021. It can be read here. The purpose of the Bill is to require employers and employment agencies to assess employees and candidates without relying on machine-learned biases, whose effects are discriminatory in nature.

Now, the Bill is on the Mayor's desk and goes into effect on January 1, 2023.


The Bill is limited to regulating AI decisions that screen candidates for employment or screen employees for promotion, and it does not ban AEDTs outright. An AEDT may be used if the tool has undergone an independent bias audit no more than one year prior to its use, and a summary of that audit must be made publicly available on the employer's or employment agency's website.
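As a hypothetical illustration of those two preconditions, the sketch below checks that an AEDT's most recent independent bias audit is no more than one year old and that a public summary is on file. The function and field names are assumptions for this post; the Bill itself does not define any such code.

  # Hypothetical check of the Bill's two preconditions for using an AEDT:
  # (1) an independent bias audit within the past year, and
  # (2) a publicly available summary of that audit.
  from datetime import date, timedelta

  def aedt_may_be_used(last_audit: date, summary_url: str | None,
                       today: date | None = None) -> bool:
      """True only if the audit is recent enough and a summary is published."""
      today = today or date.today()
      audit_is_current = (today - last_audit) <= timedelta(days=365)
      return audit_is_current and bool(summary_url)

  # Example: an audit from eight months ago with a published summary passes.
  print(aedt_may_be_used(date.today() - timedelta(days=240),
                         "https://example.com/aedt-bias-audit-summary"))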


But how will you know if the employer or employment agency is using an AEDT on you? The law imposes notification requirements that will inform employees and candidates of its use.


Employers and employment agencies caught violating the law face fines of up to $500 for the first violation and between $500 and $1,500 for each subsequent violation. They may also be exposed to a discrimination lawsuit, with compensatory damages, punitive damages, penalties, and attorneys' fees awarded to the victim. If you believe that you were discriminated against by an AI/AEDT, your lawyer will be able to determine its involvement during the lawsuit and leverage the company's non-compliance with the NYC Bill to win your case.