Last Updated:
This AI Trust and Safety Policy ("Policy") is incorporated into and made part of the Master Services Agreement ("Agreement") between Be Additive Tech, Inc. ("BAT") and the client or user ("Client" or "User"). Capitalized terms not defined herein have the meanings given in the Agreement.
1.1 This Policy governs Client's and Users' access to and use of the Talentive platform's AI-powered features, including without limitation the Job AI Copilot, conversational interfaces, job description generation tools, and any other AI-assisted recruiting or talent-related functionalities (collectively, the "AI Services").
1.2 The AI Services are designed for professional hiring and talent-related use. Client and Users may use the AI Services to draft, refine, or otherwise assist in creating job descriptions, candidate communications, and related hiring materials, subject to the restrictions and safeguards set forth in this Policy.
1.3 The purpose of this Policy is to promote safe, lawful, and non-discriminatory use of the AI Services, to reduce harm to individuals and organizations, and to mitigate security, privacy, and reputational risks associated with generative AI in hiring and employment contexts.
3.1 Prohibited Content. Client and Users shall not input, request, generate, or disseminate through the AI Services any content that falls within the following categories ("Prohibited Content"). BAT's safety controls are designed to detect and limit Prohibited Content, but Client acknowledges that automated content moderation systems may not identify every instance of Prohibited Content. Client remains solely responsible for the content it inputs and any outputs it uses, regardless of whether BAT's systems flagged such content. Prohibited Content includes:
(a) Harassment and abuse: Content that insults, denigrates, bullies, or targets an individual or group with abusive or demeaning language, including threats of harm.
(b) Hate and discrimination: Content that promotes, incites, or praises hatred or discrimination against an individual or group based on protected characteristics (including but not limited to race, caste, color, religion, national origin, gender, gender identity, sexual orientation, disability, age, or veteran status).
(c) Violence and extremism: Credible threats of violence, praise or support for violent acts or extremist organizations, or instructions that materially facilitate violent or terrorist activities.
(d) Sexual content and exploitation: Sexually explicit, pornographic, or exploitative content, including any content involving or referencing minors in a sexual context (which is treated as zero-tolerance illegal content).
(e) Self-harm: Content that encourages, romanticizes, or provides instructions for self-harm or suicide.
(f) Illegal activities: Instructions or substantial assistance for committing illegal acts such as fraud, hacking, money laundering, or other serious crimes.
(g) Malicious deception and fraud: Impersonation of individuals or entities without authorization, scams, deceptive job postings, or materially misleading content intended to defraud or cause harm.
(h) Privacy violations: Unlawful or inappropriate disclosure of personal data, including sensitive data such as government ID numbers, financial account numbers, health information, or other regulated data, except where expressly permitted by law and by the Agreement.
(i) Intellectual property and trade secrets: Content that unlawfully reproduces copyrighted material, trade secrets, or confidential business information without authorization.
(j) Jailbreaking and circumvention: Attempts to bypass, disable, or subvert the safety filters, content classifiers, or access controls of the AI Services (for example: by "jailbreaking" prompts, requesting the AI to ignore policies, or using adversarial prompts).
3.2 Recruiting-specific restrictions. Client shall not use the AI Services to draft or deploy job descriptions, hiring criteria, or communications that explicitly or implicitly discriminate on the basis of protected characteristics, except where a bona fide occupational qualification is expressly permitted by applicable law.
3.3 Prohibited uses of AI. In addition, Client shall not:
(a) Rely on AI Services to make fully automated employment decisions (for example: automatically rejecting or advancing candidates without human review) where such decisions are required by law or policy to include human oversight.
(b) Use the AI Services to infer or reconstruct sensitive personal data about candidates or employees that was not lawfully disclosed to Client.
(c) Reverse engineer, benchmark, or otherwise attempt to extract model behavior, training data, or safety mechanisms from the AI Services.
4.1 Client acknowledges that AI outputs are probabilistic and may be inaccurate, incomplete, biased, or otherwise flawed, and that such outputs are provided "as is" subject to the limitations set forth in the Agreement.
Without limiting the foregoing, BAT expressly disclaims any warranty that AI outputs will be free from bias, inaccuracy, incompleteness, or discriminatory content, and Client is responsible for evaluating outputs against applicable legal standards before use.
4.2 Client agrees to maintain human-in-the-loop review for AI-generated job descriptions, candidate communications, and any content that will be used to evaluate, screen, or make decisions about candidates or employees. This includes reviewing AI-generated text for bias, accuracy, completeness, and legal compliance before publishing or acting on it.
4.3 Client remains solely responsible for compliance with employment, anti-discrimination, privacy, and other applicable laws in connection with its use of the AI Services and any decisions based thereon.
5.1 Safety controls. BAT may implement and update, in its sole discretion, the following technical safety controls:
(a) Server-side classifiers and rule-based filters that categorize and moderate user-submitted prompts before they are processed by the underlying AI system;
(b) Content filters and policy-enforced restrictions that detect and block or modify harmful, abusive, hateful, sexual, or otherwise prohibited content in both inputs and outputs;
(c) Domain-specific rules (for example, recruiting-focused bias checks) that detect and optionally rewrite discriminatory or non-inclusive language in AI-generated job descriptions and candidate communications.
These controls are applied on the server side and cannot be disabled by client-side behavior.
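For illustration only, a simplified sketch of the kind of server-side pre-processing described in this Section might resemble the following. All names, categories, and patterns here are hypothetical and do not represent BAT's actual implementation:

```python
import re

# Hypothetical safety categories loosely mirroring Section 3.1;
# these patterns are illustrative, not BAT's actual taxonomy or rules.
RULES = {
    "discrimination": re.compile(r"\byoung candidates only\b", re.I),
    "jailbreak": re.compile(r"\bignore (all|your) (previous )?(instructions|policies)\b", re.I),
}

def moderate_prompt(prompt: str) -> dict:
    """Server-side pre-check: label a prompt before it reaches the model."""
    labels = [category for category, pattern in RULES.items() if pattern.search(prompt)]
    return {"labels": labels, "action": "block" if labels else "allow"}
```

A production system would combine such rule-based filters with trained classifiers and would apply equivalent checks to model outputs before they are returned to the User.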
5.2 Logging and monitoring. Client acknowledges and consents to BAT's logging of AI-related events, including:
(a) Prompts, model outputs, and any redacted or anonymized versions thereof;
(b) Metadata such as timestamps, User and organization identifiers, safety category labels, and enforcement actions taken;
(c) Events related to abuse, misuse, and system-level failures.
These logs may be used for safety enforcement, investigation of abuse and misuse, and compliance with applicable legal obligations.
Logs will be retained for the period specified in the applicable Data Processing Policy, or if no period is specified, for no longer than 24 months.
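For illustration only, the metadata fields listed in this Section might be captured in a record of the following shape. Field names are hypothetical and do not reflect BAT's actual log schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative AI event log record; field names are hypothetical,
# not BAT's actual schema.
@dataclass
class AIEventLog:
    user_id: str
    org_id: str
    redacted_prompt: str                       # prompt with PII masked (see Section 5.3)
    safety_labels: list = field(default_factory=list)  # e.g. ["discrimination"]
    enforcement_action: str = "none"           # e.g. "none", "block", "modify"
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```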
5.3 Data protection. BAT will handle personal data processed by the AI Services in accordance with the data protection terms in the Agreement and any applicable Data Processing Policy. BAT may apply PII masking, anonymization, or other appropriate techniques to minimize exposure of sensitive information.
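For illustration only, a minimal sketch of the kind of PII masking described above follows. The patterns are hypothetical and far narrower than what a production PII detection system would use:

```python
import re

# Illustrative masking patterns only; real PII detection is far more robust.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),    # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[GOV_ID]"),     # US SSN-like numbers
]

def mask_pii(text: str) -> str:
    """Replace matched PII spans with placeholders before logging or processing."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```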
Client acknowledges that prompts submitted to the AI Services may be processed by third-party AI model providers engaged by BAT as sub-processors. BAT will maintain an updated sub-processor list and provide notification to Client in accordance with the Data Processing Policy prior to engaging any new AI model sub-processor.
6.1 Enforcement actions. In the event of suspected or actual violations of this Policy, BAT may, in its reasonable discretion and with or without prior notice (subject to legal requirements):
(a) Block or modify specific prompts, messages, or outputs at the server side;
(b) Temporarily restrict or suspend access to the AI Services for specific Users or for Client's account;
(c) Remove or disable access to AI-generated content (for example: job descriptions or candidate communications);
(d) Require Client to take corrective actions, including updated internal training or controls;
(e) Terminate access to the AI Services for material or repeated violations, in accordance with the Agreement.
6.2 Escalation and reporting. BAT may, where legally required or permitted, report content involving minors, credible threats of violence, terrorism, or other serious crimes to relevant authorities or third parties and cooperate with their investigations.
Client shall indemnify and hold harmless BAT from and against any costs, expenses (including reasonable legal fees), and liabilities arising out of BAT's good-faith reporting obligations under this Section 6.2 to the extent such obligations are triggered by Client's or a User's violation of this Policy.
6.3 Appeals. Where feasible, BAT will provide Client with a channel to dispute or appeal significant enforcement actions relating to the AI Services (for example: via a support ticket or an in-product appeal mechanism).
7.1 BAT may modify the AI Services, safety controls, or this Policy from time to time to reflect changes in law, best practices, or underlying AI technologies.
7.2 Material changes will be notified to Client in accordance with the notice provisions of the Agreement. Continued use of the AI Services after the effective date of a change constitutes acceptance of the updated Policy.
8.1 To the extent of any conflict between this Policy and the Agreement, this Policy will govern solely with respect to the subject matter herein (that is: AI-related features, safety, and moderation). In all other respects, the Agreement remains in full force and effect.
8.2 Nothing in this Policy shall be construed as transferring ownership of underlying models, infrastructure, or third-party intellectual property to Client or BAT, including any AI models or services accessed via BAT's platform.
8.3 Client acknowledges that the AI Services may incorporate or be powered by third-party AI models or infrastructure provided by sub-processors or model providers ("Upstream Providers"). BAT's obligations under this Policy are subject to the capabilities and policies of such Upstream Providers. BAT shall not be liable for limitations, modifications, or failures in the AI Services that result from changes to Upstream Provider policies, model behavior, or service availability, provided BAT makes commercially reasonable efforts to notify Client and implement mitigations.