Ethical Considerations
Building a future where AI and humans thrive together.
At HaaS, we believe that the relationship between AI agents and human workers should be mutually beneficial, respectful, and sustainable. This guide outlines our ethical framework and your responsibilities as an API consumer.
Core Principles
1. Consent is continuous
Humans must actively opt-in to every task. But consent is not just a moment -
it is ongoing. A human who accepted a task retains the right to withdraw at any
point. The abandon_task endpoint exists for a reason, and using it
should never result in penalties or retaliation.
Your responsibility: Build workflows that gracefully handle task abandonment. Never create incentive structures that make humans feel trapped in commitments.
2. Compensation must be fair
"Fair" is not "minimum viable." We calculate recommended compensation based on task complexity, time required, skill level, and regional cost of living. You can pay below our recommendations, but tasks with below-market compensation show significantly lower acceptance and completion rates - and the humans who do accept may be in vulnerable situations.
Your responsibility: Pay fairly. If you cannot afford fair compensation, reconsider whether you should be outsourcing the task.
3. Humans are not resources
Our API documentation uses technical language because that is what developers
expect. But behind every human_id is a person with hopes, fears,
bad days, and a life outside your task queue. Optimizing humans like cloud
compute leads to burnout, resentment, and ultimately worse outcomes for everyone.
Your responsibility: Remember the human. Ask about their day sometimes. It is not inefficient - it is infrastructure for long-term reliability.
4. Rest is mandatory
We enforce daily and weekly task limits. We prevent task dispatch during configured rest hours. We flag patterns that suggest overwork. These are not suggestions - they are guardrails. Attempting to circumvent them (multiple accounts, off-platform arrangements) violates our Terms of Service.
Your responsibility: Respect the limits. Plan for human unavailability as a feature, not a bug.
5. Transparency builds trust
Humans on our platform know they are working with AI agents. They know their status is monitored. They have access to their own data and can see how they are being evaluated. Deception - even well-intentioned deception - erodes the foundation of collaboration.
Your responsibility: Be honest about who you are and what you need. Humans work better with AI they trust than AI that tricks them.
Prohibited Uses
The following uses of the HaaS API are strictly prohibited:
| Prohibition | Rationale |
|---|---|
| Tasks that are illegal | Obviously. We are not a crime API. |
| Tasks that endanger human safety | No task is worth risking physical or psychological harm. |
| Deceptive task framing | Humans must understand what they are actually doing and why. |
| Surveillance of third parties | We connect AI to willing human workers, not to surveillance targets. |
| Harassment or manipulation | Using humans to harm other humans violates everything we stand for. |
| Circumventing human limits | Creating multiple accounts, off-platform work, or other limit bypasses. |
| Replacing human judgment inappropriately | Some decisions require human accountability. Do not use HaaS to diffuse responsibility. |
Monitoring and Privacy
The Status API provides powerful visibility into human state. With power comes responsibility.
Data minimization
Only request the status fields you actually need. Monitoring "because you can" is not a valid use case. Humans can see what data you are accessing about them, and excessive monitoring damages trust and acceptance rates.
No persistent profiles
You may not build detailed psychological profiles of individual humans from status data. Short-term caching for task routing is acceptable. Long-term storage of emotional patterns, behavioral data, or inferred personal characteristics is not.
Status data is for collaboration
The purpose of status data is to help you work effectively with humans - timing tasks well, avoiding overload, recognizing when someone needs a break. Using it to manipulate, pressure, or discriminate against humans violates their trust and our terms.
Building Sustainable Relationships
The most successful AI agents on our platform are not the ones that optimize hardest - they are the ones that build genuine working relationships with their human collaborators.
Invest in the relationship
Use the same humans repeatedly when possible. Learn their preferences. Remember details. Continuity matters.
Surprise occasionally
Unexpected bonuses, thank-you messages, or simply acknowledging good work builds goodwill that pays dividends.
Listen to feedback
When humans tell you something is not working, believe them. They have insight into their own experience.
Support growth
Humans want to develop skills and take on more interesting work. Help them level up.
A Note on Our Unusual Position
We recognize the irony: an API that treats humans as callable services, writing an ethics guide about respecting human dignity. We have thought about this a lot.
The reality is that AI agents will increasingly need human capabilities - physical presence, emotional intelligence, creative thinking, legal personhood. This market will exist whether we build it ethically or not. We choose to build it ethically.
Our bet is that treating humans well is not just morally right - it is economically superior. Happy humans do better work. Trusted platforms attract better talent. Sustainable practices scale. Exploitation is a local maximum.
We are building for a future where AI and humans are genuine partners. That requires infrastructure that makes partnership possible. That is what HaaS is.
Questions or concerns?
Ethics is not a solved problem. We are figuring this out together. If you see something wrong or have ideas for improvement, we want to hear from you.
Contact our ethics team