Generative AI and Machine Learning
Reid Health uses AI and machine learning to improve healthcare services while keeping your data secure and following ethical guidelines in patient care.
Use of Generative AI and Machine Learning Applications Policy
Reid Health is committed to the secure, legal, and ethical use of its data and information systems.
Our guidelines govern the use of generative AI models such as ChatGPT and similar tools. They ensure we uphold our standards for:
- High-quality care
- Ethical practices
- Patient privacy
- Data security
This guidance covers:
- Accuracy and reliability of AI tools
- Patient privacy and data security
- Ethical use
- Knowing the limitations of AI tools
- Continuous learning and improvement
- Reporting
At Reid Health:
- Such systems must not be used to make decisions that could harm a patient unless those decisions are confirmed by qualified Reid Health staff.
- A security risk assessment must be completed for all such systems.
- The owner of any such system is accountable for the decisions that system makes.
- Sensitive data can only be stored, transmitted, or processed by such systems under specific conditions.
- Such systems may be used to create content only in accordance with the sensitivity level of the information systems for which that content is created.
- Generative AI may be used to process data such as:
- Medical images
- Clinical notes
- Genetic data
- Generative AI may be used for a variety of purposes in healthcare, including:
- Creating new medical images, such as X-rays or MRIs
- Generating personalized treatment plans based on a patient's medical history and other factors
- Developing new drugs and therapies
- Conducting medical research
- Drafting a reply to a patient message
- To ensure patient privacy and confidentiality:
- All data used by Generative AI will be de-identified or pseudonymized.
- Only authorized personnel will have access to the data.
- All data will be stored securely.
- Patients will have the right to access and control their data.
- Such systems must adhere to all applicable:
- Laws
- Regulations
- Company policies
- Content produced by a Generative AI system must be labeled as such.
- Such systems will be restricted to only the data and functions necessary to perform their intended operations.
- Reid Health staff who interact with such systems must be able to understand how AI decisions are made and why specific outcomes are produced.
- All Generative AI systems must undergo a review at least annually to ensure they are functioning as intended.