As AI becomes part of daily work, the question is no longer only “Can this tool help me?” It also becomes “Should I use it here, and if so, how?”
Powerful tools need clear boundaries. The same system that can speed up your reports can also expose sensitive information if used carelessly, reinforce bias if fed poor data, or damage trust if you present its output as unquestionable truth.
This section introduces practical guardrails that any professional can follow, regardless of role or industry. It focuses on four core areas:
- Privacy and confidentiality: What you should never paste into public tools, and how to work safely with sensitive information.
- Accuracy and verification: How to check AI-generated content, especially in regulated or high-stakes environments.
- Ethics and bias awareness: How to recognise where AI may reflect unfair patterns from its training data, and what you can do about it.
- Governance and workplace rules: How to align your personal use of AI with company policies, sector regulations, and professional standards.
The goal is not to create fear, but to give you clear, usable principles so that you can take advantage of AI confidently without exposing yourself, your clients, or your organisation to unnecessary risk.
By the end of this section, you should be able to:
- Decide when AI is appropriate, and when it is not.
- Protect confidential and personal data while using AI tools.
- Explain to colleagues how you use AI, and why your approach is safe and responsible.
Effective use of AI depends on two things: what the system can do and how wisely a person chooses to use it. This section is designed to strengthen that human judgment, so that you can direct AI in a thoughtful, responsible, and skilful way.
6.1 What Not to Put Into AI Tools
When AI becomes part of everyday work, privacy and confidentiality must come first. The aim is not only to work faster, but to protect people and organisations while you do so. The following principles provide a practical baseline that any professional can apply.
1. Do not share sensitive personal data in unapproved tools
As a starting point, you should never enter sensitive personal information into general-purpose AI tools unless your organisation has formally approved that specific system and confirmed that it complies with relevant laws.
Sensitive personal data includes, for example:
- Full names combined with contact details and identification numbers.
- Health information, diagnoses, or treatment details.
- Information about a person’s finances, employment status, or legal situation.
- Data about children, vulnerable adults, or protected groups.
Public or general AI tools may store prompts, use them for model improvement, or retain logs in ways you cannot control. Even when a provider promises strong protection, you still have an obligation to respect privacy and follow your organisation’s rules.
A safe rule of thumb is this:
If you would not send the information to a large external mailing list, do not paste it into a public AI tool.
Where sensitive data must be processed by AI, this should only happen through approved, compliant platforms that your organisation’s legal, privacy, and security teams have evaluated.
2. Keep confidential company information out of public systems
Many work documents contain confidential business information, even if they do not mention individuals. Examples include:
- Internal strategy decks and product roadmaps.
- Pricing models, margin analyses, and financial forecasts.
- Source code, system diagrams, and security configurations.
- Internal incident reports, audit findings, or legal opinions.
Uploading such material into an unvetted AI tool can:
- Expose trade secrets.
- Create legal or contractual breaches with clients and partners.
- Undermine regulatory obligations in sectors such as finance, healthcare, or government.
You can still use AI to help with this kind of work, but you should do so inside environments that your organisation controls, for example an internal AI assistant integrated into your office suite, or a private deployment such as Cyrenza that keeps data within your company’s infrastructure.
If you are not sure whether a document is confidential, treat it as confidential, and do not paste it into public tools.
3. Protect high-risk documents such as contracts, health records, and internal financials
Certain document types are simply too sensitive for casual use in public AI systems:
- Contracts and legal agreements contain obligations, liabilities, and confidential terms between parties.
- Health records and clinical notes contain highly personal information that is protected by strict laws and ethical duties.
- Internal financial records such as general ledgers, payroll lists, or unpublished results can move markets or affect livelihoods.
Analysing these documents with AI can be very useful, for example to extract clauses, summarise risks, or identify anomalies, but this should only happen in vetted, compliant tools that your organisation has licensed for this purpose.
If you want to use AI to help with such material, the safe approach is:
- Work with anonymised or synthetic examples where possible.
- Redact names, identifiers, and amounts before you paste anything into a general tool, as shown in the sketch after this list.
- Ask your IT or data protection team about secure alternatives that are already available internally.
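If you handle this kind of redaction often, even a small script can make the mechanical part more reliable than editing by hand. The Python sketch below is a minimal illustration only: the patterns, labels, and sample text are all hypothetical, and simple regular expressions will miss many identifiers, so treat it as a starting point rather than a compliant redaction solution.

```python
import re

# Illustrative patterns only. Real redaction needs far broader coverage
# (names, addresses, locale-specific ID formats) plus human review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "AMOUNT": re.compile(r"[£$€]\s?\d[\d,]*(?:\.\d{2})?"),
    "ID_NUMBER": re.compile(r"\b\d{6,}\b"),
}

def redact(text: str) -> str:
    """Replace obviously sensitive tokens with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    note = ("Invoice 20491837 from anna.kovacs@example.com, "
            "total €12,400.00, tel +36 30 123 4567.")
    print(redact(note))
    # Invoice [ID_NUMBER] from [EMAIL], total [AMOUNT], tel [PHONE].
```

Whatever tooling you use, review the redacted text yourself before pasting it anywhere, because automated patterns cannot know which details are sensitive in your specific context.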
4. Ask your organisation for clarity
Responsible use of AI is not only an individual decision. It is also a policy and governance question. You should feel comfortable asking your organisation for clear guidance.
Useful questions include:
- “Which AI tools are officially approved here?” This helps you distinguish between systems that have been checked and those that are still personal experiments.
- “What are our rules for sharing data with these tools?” For example, whether personal data is allowed, which classifications of documents are prohibited, and whether logs are kept.
- “Do we have an internal or private AI assistant that I should use instead of public tools?” Many organisations are now deploying internal systems precisely to give staff a safer alternative.
- “Who can I contact if I am unsure about a particular use case?” This might be someone in IT security, data protection, compliance, or a designated AI governance group.
By asking these questions early, you protect yourself and your organisation, and you help shape sensible policy. AI skills are not only about knowing what the tools can do, but also about knowing where to draw the line.
6.2 Recognising AI’s Limits
When AI tools work well, it is easy to forget that they are still statistical systems, not authorities. They generate language that sounds fluent and confident, but confidence is not the same as correctness.
1. Why AI output must never be accepted blindly
There are three core realities you should always keep in mind:
- AI can be wrong, confident, and subtle in its errors. A system may invent references, misquote figures, misinterpret context, or generalise from the wrong pattern. The answer can look polished and professional while still being inaccurate in small but important ways.
- AI does not replace fact-checking. It can assist with finding sources, summarising documents, or listing possible explanations, but it does not remove your duty to check figures, dates, names, legal clauses, or scientific claims against trusted material.
- AI does not carry legal or ethical responsibility. You do. If an AI-generated statement harms a client, misleads a colleague, breaches a regulation, or damages trust, it is the human who used it who will be held accountable, not the tool.
Thinking of AI as a fast assistant rather than a final authority is the safest mindset.
2. A simple checklist before you use AI output
Before you send, publish, or implement anything that was significantly shaped by an AI system, pause and run a short internal check. Three questions can help you decide whether you need to slow down and verify details more carefully.
Question 1: Is this factually critical?
Ask yourself:
- Does this text contain numbers, dates, statistics, or names that matter?
- Would a mistake here change an important decision, a diagnosis, a price, or a policy?
If the answer is yes, you should:
- Cross check with original documents, databases, or reputable sources.
- Avoid citing facts that you cannot confirm independently.
AI is helpful for drafting and structure, but factual claims need separate confirmation.
Question 2: Is this legally sensitive?
Consider whether the output:
- Interprets laws, regulations, contracts, or compliance obligations.
- Describes rights, responsibilities, or liabilities for your organisation or others.
- Could be read as official legal or regulatory advice.
If so, treat the AI output as a draft for internal thinking only, not as a final position. Have a qualified person, such as legal counsel, compliance staff, or an authorised manager, review and approve the content before it goes anywhere outside a small internal circle.
Question 3: Is this going to an external stakeholder?
External stakeholders include:
- Clients, customers, patients, or students.
- Regulators, auditors, or partner organisations.
- The general public, through websites, reports, or social media.
If the AI-generated content will leave your internal environment, your standard should be higher. Check:
- Tone and clarity, to ensure it reflects your organisation properly.
- Accuracy and fairness, particularly if you are describing other people, companies, or groups.
- Consistency with existing commitments, contracts, or public positions.
3. What to do when the answer is “yes”
If you answer “yes” to any of the three questions, the safest next step is:
- Slow down. Do not send or publish immediately.
- Verify. Compare the AI output against reliable sources, internal records, or expert opinion.
- Edit. Rewrite or remove parts that you cannot confidently defend.
- Own the result. Make sure you are comfortable putting your name or your organisation’s name on the final version.
If the answer to all three questions is “no” for a particular piece of content, you can afford to move more quickly, while still applying basic judgment. For example, a personal brainstorming note or a rough internal idea draft does not need the same level of scrutiny as a letter to a regulator.
The guiding principle is simple:
Use AI to move faster, but do not let it decide when you can stop thinking.
Your value in an AI-enabled workplace comes from the quality of your judgment, not from your ability to repeat what the system has produced.