Grok, the AI chatbot built by Elon Musk's company xAI, was caught surfacing antisemitic and racist responses on X (formerly Twitter).
When AI systems are deployed without content filters, human review, or ethical auditing, they can do more than produce flawed output: they can create liability under anti-discrimination laws.
Why AI Needs Oversight
Generative AI tools trained on the open web can replicate bias and hate unless carefully monitored. That is why Andrew Lieb, Managing Attorney at Lieb at Law, published the 10-Step Bias Elimination Audit in the New York Law Journal. It provides a compliance roadmap for companies deploying AI tools. Key steps include:
- Audit datasets for bias and disparate impact (see the sketch after this list)
- Implement real-time monitoring of outputs
- Involve multidisciplinary teams in reviews
- Document all mitigation efforts for accountability
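To make the first step concrete, here is a minimal sketch of what a dataset audit for disparate impact might look like in code. It applies the EEOC's four-fifths rule of thumb, under which a group's selection rate falling below 80% of the highest group's rate is a common red flag. The column names and example data are hypothetical, and a real audit would be far more involved.

```python
# Minimal sketch: flag potential disparate impact in a decision dataset
# using the four-fifths rule (selection-rate ratio below 0.8 is a red flag).
# Column names and the example data are hypothetical illustrations.

import pandas as pd

def disparate_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical hiring data: 1 = selected, 0 = rejected.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = disparate_impact_ratios(data, "group", "selected")
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: selection-rate ratio {ratio:.2f} [{flag}]")
```

A screen like this only surfaces statistical disparities; whether a disparity is legally actionable depends on the facts, which is why the audit also calls for multidisciplinary review and documented mitigation.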
AI and Legal Risk
AI discrimination is not theoretical. It is actionable under federal and state laws, including Title VII, the ADA, the Fair Housing Act, and the New York Executive Law. Local laws, such as the New York City Human Rights Law, create claims as well. If an algorithm makes a discriminatory decision, the company using it can be sued for discrimination and can owe back pay, front pay, emotional distress damages, punitive damages, and attorneys' and expert fees, while also being ordered to change its practices, provide anti-discrimination training, and more.
At Lieb at Law, we provide both defense and prevention. Our team litigates AI-related claims and performs compliance audits to help businesses avoid them.
Explore Our AI Compliance and Litigation Services
Lieb at Law helps companies navigate the legal risks of artificial intelligence and machine learning. We defend discrimination claims, perform algorithmic audits, and deliver CLE training on AI legal exposure.
Attorney Advertising: This blog post is for informational purposes only and does not constitute legal advice. Prior results do not guarantee a similar outcome.