SHIFT GROUP

Generative AI Utilization Policy

Effective Date: March 1, 2026
Responsible Department: AI Management Office / Compliance Committee

1. Purpose

This policy establishes the fundamental principles and compliance requirements for utilizing Generative AI in the business operations of SHIFT Inc. (hereinafter “the Company”). Its purpose is to prevent risks such as information security incidents and rights infringements while improving operational efficiency and quality.

2. Scope of Application

  • (1) Applicable Persons: Officers, employees (including full-time employees, contract employees, part-time employees, etc.), temporary staff, and employees of partner companies engaged in the Company’s business (hereinafter referred to as “Users”).
  • (2) Applicable Tools: All generative AI services selected by the Company that generate text, images, program code, audio, etc. (e.g., ChatGPT, GitHub Copilot, Gemini, Midjourney, DeepL, Devin).

3. Basic Principles

  • (1) Supportive Use: Generative AI is solely an auxiliary tool to support work. Users must understand that responsibility for the quality of final deliverables rests with the human user.
  • (2) Compliance with Laws and Contracts: Users shall prioritize compliance with laws such as the Copyright Act and the Personal Information Protection Act, as well as contracts with clients (e.g., non-disclosure agreements).
  • (3) Ensuring Transparency: Where necessary, Users shall appropriately disclose and record the fact that Generative AI was used and the scope of such use (including cases involving external data integration technologies such as RAG).
  • (4) Fairness and Respect for Human Rights: Users shall recognize that AI training data and outputs may contain unconscious biases or inappropriate expressions related to gender, age, nationality, or other attributes, and shall not use discriminatory or unethical outputs in business operations.

4. Usage Environment and Security (Input Data Restrictions)

Users must strictly comply with the level of information permitted for input based on the security level of the applicable tool (whether input data is used for AI training, security audits of service providers, etc.).

Level: Standard
Usage Environment Conditions: AI tools authorized for use within the business environment provided by the Company.
Permitted Input Information:
  • Publicly available information
  • General content (e.g., research on general technical information)
  • Translation (excluding confidential information)
Prohibited Input Information:
  • Client and project-related information
  • Unpublished information (unpublished source code, specifications, etc.)
  • Personal information
  • Internal confidential information

Level: Confidential
Usage Environment Conditions: In addition to the Standard conditions, AI tools (including in-house AI tools) must meet all of the following requirements:
  • Input data must not be used for training
  • The provider implements information security and information management measures deemed reasonable by the Company
  • An appropriate license agreement is in place
Permitted Input Information: Information other than the Prohibited Input Information for this level, limited to the extent not violating laws and regulations. Examples of permitted information:
  • Personal information
  • Internal meeting minutes
  • General business documents
  • Client project information (※)
Prohibited Input Information:
  • Highly confidential management information
  • Information explicitly prohibited from input by clients/users

Level: Outside the company
Usage Environment Conditions: AI usage within the client environment, as permitted by the client.
Permitted Input Information: Items permitted by the client.
Prohibited Input Information: Items prohibited by the client.

(※ ) This applies only when the use of AI in a secure environment is permitted under the contract with the client.

5. Special Provisions for Software Testing and Development Work

Considering the nature of our business, the following rules must be followed for technical work.

  • (1) Prior Explanation and Consent from Clients:
    When using generative AI in client operations (testing, outsourced development, etc.), the following must be explained to the client beforehand: “the AI tools to be used,” “the scope of input data,” and “security measures.” Obtaining explicit consent (permission) through contracts or similar means is the principle.
  • (2) Handling of Code:
    •  ⚪︎ When inputting client source code into the AI tool, do not input unnecessary information. Perform masking as needed, such as replacing variable names or function names.
    •  ⚪︎ Inputting client production data as “seeds” for AI test data generation is prohibited.
    •  ⚪︎ However, this excludes information that can be input within the confidential environment specified in “4. Usage Environment and Security (Input Data Restrictions).”
  • (3) Handling of Personal Information:
    •  ⚪︎ Inputting personally identifiable information (names, addresses, phone numbers, etc.) provided by clients during technical operations is prohibited (inputting actual data is strictly forbidden, even for creating dummy data).
    •  ⚪︎ However, this excludes information that can be input within the confidential environment specified in “4. Usage Environment and Security (Input Data Restrictions).”
  • (4) Verification (Review) and Application of Generated Output:
    •  ⚪︎ Test code, scripts, design documents, etc., generated by AI must always be verified by a human with specialized knowledge to confirm the accuracy of the content, security (absence of vulnerabilities), and expected behavior.
    •  ⚪︎ Fact-checking must be performed to ensure the output does not contain AI-generated “hallucinations” (plausible falsehoods).
    •  ⚪︎ As a general rule, it is prohibited to automatically deploy AI outputs to production environments or similar settings without prior human review.
  • (5) Test Result Evaluation:
    •  ⚪︎ Do not rely solely on AI for pass/fail determinations of test results. Use AI only as an auxiliary tool for judgment; final decisions must be made by humans.
  • (6) Delivery Notes:
    •  ⚪︎ Do not deliver AI-generated content to clients while concealing its AI origin or falsely claiming it was created entirely by oneself.
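The masking described in 5(2) can be sketched with a short helper. The example below is purely illustrative (the function name, placeholder scheme, and sample identifiers are assumptions, not a Company-provided tool): it replaces client-specific identifiers with neutral placeholders before a snippet leaves the secure environment, and keeps a mapping so the original names can be restored afterward. A production process should additionally cover comments, string literals, and file paths.

```python
import re

def mask_identifiers(source: str, sensitive_names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace client-specific identifiers with neutral placeholders.

    Returns the masked source plus a placeholder-to-original mapping,
    so names can be restored after the AI tool returns its output.
    """
    mapping: dict[str, str] = {}
    masked = source
    for i, name in enumerate(sensitive_names):
        placeholder = f"ident_{i}"
        mapping[placeholder] = name
        # \b ensures only whole identifiers are replaced, not substrings
        masked = re.sub(rf"\b{re.escape(name)}\b", placeholder, masked)
    return masked, mapping

# Hypothetical client snippet with project-identifying names
code = "def calc_acme_invoice(acme_rate):\n    return acme_rate * 1.1\n"
masked, mapping = mask_identifiers(code, ["calc_acme_invoice", "acme_rate"])
```

After masking, the snippet no longer reveals the client name, while the mapping allows the reviewer to translate the AI tool’s answer back into the original identifiers.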

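Likewise, the rule in 5(3) implies that test records should be synthesized without real client data as input. A minimal stdlib-only sketch (the field layout and value formats are illustrative assumptions, not a prescribed schema) could look like:

```python
import random
import string

def make_dummy_customer(seed: int) -> dict:
    """Generate a synthetic customer record for testing.

    Values are derived solely from the deterministic seed;
    no real personal or client data is used as input.
    """
    rng = random.Random(seed)
    name = "Test User " + "".join(rng.choices(string.ascii_uppercase, k=2))
    phone = ("000-" + "".join(rng.choices(string.digits, k=4))
             + "-" + "".join(rng.choices(string.digits, k=4)))
    return {"name": name, "phone": phone, "email": f"user{seed}@example.com"}

record = make_dummy_customer(42)
```

Because the record is generated from a seed, the same dummy data can be reproduced across test runs without ever touching production data.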
6. Handling of Intellectual Property Rights and Copyrights

  • (1) Preventing Infringement of Others’ Rights:
    •  ⚪︎ (Input) Users shall not input third-party copyrighted works (e.g., existing images, articles, source code) into prompts for the purpose of intentional imitation or modification without the rights holder’s permission, except where permitted by law.
    •  ⚪︎ (Output) Users shall reasonably confirm that AI-generated deliverables are not substantially similar to existing copyrighted works or trademarks in a manner that may violate any applicable laws.
  • (2) Ownership of Rights: Rights to AI-generated outputs created in the course of business shall, in principle, belong to the Company (or to the client, as provided by contract).

7. Prohibited Actions

Users must not engage in the following acts in the course of business.

  • (1) Inputting business data into generative AI tools not permitted by the Company (shadow IT). If a User wishes to use a new generative AI tool not permitted by this policy for business purposes, the User must apply to the responsible department in advance and obtain a security evaluation and usage permission.
  • (2) Using the applicable generative AI tools with any account other than the account assigned by the User’s organization (including shadow IDs or IDs obtained outside prescribed organizational procedures).
  • (3) Using AI tools for improper or illegal purposes, such as creating malware or exploring cyberattack methods.

8. Response to Violations

  • (1) Incident Reporting: If Users discover an incident (including a potential incident) such as suspected information leakage, unintentional input of confidential information, or serious rights infringement or system failure caused by AI-generated outputs, they must immediately cease use of the generative AI and promptly report it to the security-related department.
  • (2) Disciplinary Action and Investigation: If a User violates this policy and causes damage to the Company or clients, such User may be subject to disciplinary action in accordance with employment regulations and contracts, and the Company may seek compensation for damages.
    When the Company judges that a User is suspected of violating this policy, the Company may investigate the User’s usage of the target generative AI tools, and the User must cooperate with such investigation.

9. Revision of this Policy

The Company reserves the right to amend this policy as the Company deems necessary, considering changes to laws and regulations governing the utilization of generative AI and shifts in the environment.

End
