
Ethics Policy

Last updated: March 8, 2026

Avala builds data infrastructure for Physical AI and frontier models. Our platform powers safety-critical systems — from autonomous vehicles to robotics — where the quality and integrity of training data directly affects human safety. This responsibility shapes every decision we make.

We publish these principles to hold ourselves accountable to our customers, our global workforce of annotators, and the broader AI community.

Our Principles

1. Build for Safety

AI systems trained on our data operate in the physical world. We treat annotation quality as a safety requirement, not a metric. We maintain rigorous quality controls, audit trails, and verification processes because errors in training data can lead to real-world harm.

We will not accept work that is designed to cause harm, violate human rights, or enable mass surveillance of individuals.

2. Eliminate Bias

Training data reflects the choices of the people who create it. We actively work to identify and remove bias from our datasets across all protected characteristics, including race, ethnicity, gender, age, disability, socioeconomic status, sexual orientation, and political or religious belief.

We invest in diverse annotation teams, structured review processes, and bias detection tooling. We acknowledge that eliminating bias is an ongoing effort, not a one-time fix.

3. Treat Our Workforce Fairly

Avala employs over 15,000 annotators across three continents. We reject the exploitative labor practices that have defined much of the data labeling industry.

We commit to:

  • Fair compensation — wages that meet or exceed local living standards, not race-to-the-bottom pricing
  • Benefits and stability — healthcare, paid time off, and consistent work where possible
  • Career development — training programs that build specialized skills and upward mobility
  • Safe working conditions — content moderation protocols that protect annotators from harmful material, with psychological support resources available

4. Be Transparent

Our customers have the right to understand how their data is processed. We maintain full audit trails from raw data to final annotation. We do not use opaque subcontracting chains. When AI-assisted tools are part of the annotation pipeline, we disclose this clearly.

We are open about our methods, our limitations, and our mistakes.

5. Protect Privacy

We handle sensitive data — medical imagery, street-level scenes, personal information. We follow strict data handling protocols, maintain SOC 2 compliance, and adhere to GDPR, CCPA, and other applicable privacy regulations. Customer data is segregated, access-controlled, and never used beyond the scope of the agreed engagement without explicit consent.

See our Privacy Policy for details.

6. Align AI with Human Values

We believe AI development should be guided by human values. Through our alignment work, we contribute to AI safety research and responsible development practices. We use human feedback to help align AI systems with human intent — not just optimize for narrow performance metrics.

7. Share What We Learn

We contribute to the broader AI safety and data quality conversation through published research, open standards participation, and collaboration with customers and partners. We believe the industry benefits when best practices are shared rather than hoarded.

Governance

These principles are reviewed annually by Avala's leadership team and updated as our understanding of AI ethics evolves. Our Data Protection Officer oversees compliance with privacy and ethical data handling requirements.

We welcome feedback on this policy. Contact us at ethics@avala.ai.