AI Use Policy

How we use artificial intelligence in our consulting practice.

Version 1.0  |  Effective March 2026

1. Purpose and Scope

This policy sets out how Coastal Cyber Pty Ltd uses artificial intelligence (AI) tools in our consulting practice. It applies to all work we perform, whether for clients, in our own operations, or in content we publish publicly.

We use AI to work efficiently and deliver better outcomes for our clients. This policy exists to make our approach transparent and to ensure AI use never compromises the quality, confidentiality, or integrity of our work.

Our position on AI

AI is a tool, not a substitute for professional judgement. Every deliverable that leaves Coastal Cyber - regardless of how it was drafted - reflects our expertise and is our responsibility. We own it.

2. AI Tools We Use

Coastal Cyber currently uses the following AI platform in our practice:

  • Claude - Anthropic (claude.ai): primary AI assistant for drafting, research, and analysis

We evaluate any new AI tools before use against the criteria in Section 6. This list is reviewed at least annually.

3. How We Use AI

3.1 Permitted uses

We use AI assistance for the following activities:

  • Drafting reports, policies, procedures, and client communications - all reviewed and edited before delivery
  • Research: summarising frameworks, standards, and regulatory guidance - always verified against primary sources
  • Generating templates and structured content for client engagements
  • Summarising meeting notes, lengthy documents, or technical material
  • Content creation: LinkedIn posts, articles, and newsletter drafts
  • Internal operations: scheduling, planning, process documentation

3.2 What AI does not do

AI does not perform the following functions in our practice:

  • Make risk decisions, risk ratings, or security recommendations without human review and sign-off
  • Access, process, or store client systems, credentials, or live data environments
  • Communicate directly with clients on our behalf
  • Generate content that is published or delivered without review

The judgement boundary

Delegation to AI stops where professional judgement begins. Risk assessments, control recommendations, strategic advice, and compliance opinions are always authored by a qualified human consultant. AI may draft; we decide.

4. Confidentiality and Data Handling

4.1 What we do not enter into AI tools

Except where expressly noted, the following categories of information are never entered into AI platforms:

  • Client names, entity names, ABNs, or identifying details - unless using enterprise-grade API with no training on inputs
  • Personally identifiable information (PII) relating to clients or their staff
  • Confidential technical details: network architecture, system configurations, vulnerability assessment data
  • Contract terms, pricing, or commercially sensitive client information
  • Information subject to legal privilege or regulatory restriction

4.2 How we protect confidentiality in AI workflows

Where AI assistance is used on client-adjacent work, we apply the following controls:

  • Anonymise or generalise client-specific details before entering content into AI tools
  • Use generic scenarios and hypothetical examples rather than real client data
  • Do not rely on AI platform privacy policies as a substitute for our own access controls
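Where client-adjacent text must pass through an AI tool, the anonymisation control above can be partly automated. The sketch below is a minimal, hypothetical illustration in Python; the client list, regex patterns, and placeholder tokens are example values assumed for demonstration, not our production tooling:

```python
import re

# Illustrative pre-submission scrubber: replaces client-identifying details
# with generic placeholders before any text is entered into an AI tool.
# CLIENT_NAMES and the patterns below are example values only.
CLIENT_NAMES = ["Acme Logistics", "Example Health Group"]

# 11-digit Australian Business Number, optionally space-grouped (2-3-3-3)
ABN_PATTERN = re.compile(r"\b\d{2}\s?\d{3}\s?\d{3}\s?\d{3}\b")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+(?:\.[\w-]+)+\b")

def anonymise(text: str) -> str:
    """Replace client-identifying details with placeholder tokens."""
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    text = ABN_PATTERN.sub("[ABN]", text)
    text = EMAIL_PATTERN.sub("[EMAIL]", text)
    return text

draft = "Acme Logistics (ABN 51 824 753 556) raised this via ops@acme.example."
print(anonymise(draft))  # [CLIENT] (ABN [ABN]) raised this via [EMAIL].
```

A scrubber like this supports, but does not replace, the human review of every prompt before submission.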

4.3 Data residency

Claude (Anthropic) processes data on infrastructure operated by Anthropic. We account for our obligations under the Privacy Act 1988 (Cth) and the Australian Privacy Principles, including APP 8 on cross-border disclosure of personal information. We do not enter personal information into AI tools in a form that would trigger notification or consent requirements.

Clients with specific data sovereignty requirements should advise us at engagement commencement. We will adjust our workflow accordingly.

5. Quality Control

AI-assisted work does not leave this practice without human review. Our quality standard is the same regardless of how content was produced.

  • Client reports and deliverables - Full review: accuracy, completeness, tone, and compliance with engagement scope
  • Framework and regulatory references - Verified against primary source before inclusion in any deliverable
  • Templates and internal documents - Reviewed at creation; re-reviewed before each client use
  • Published content (articles, LinkedIn, newsletter) - Reviewed for factual accuracy, professional appropriateness, and alignment with our positioning
  • Risk ratings and control recommendations - Independently derived; AI output used for drafting only, not for the underlying assessment

We do not use AI-generated content as a substitute for our own research or analysis. If we cannot verify a statement from a primary source, it does not go into a client deliverable.

6. AI Tool Evaluation Criteria

Before using any new AI tool in our practice, we assess it against the following criteria:

  • Data privacy policy - Clear terms on data retention, training use, and cross-border processing
  • Opt-out from training - Ability to disable use of inputs for model training (required)
  • Data residency - Documented processing locations; acceptable for Australian Privacy Act compliance
  • Security posture - SOC 2 Type II or equivalent; published security documentation
  • Vendor stability - Established commercial entity with documented business continuity
  • Access controls - MFA support; audit logging available

Any tool that does not meet these criteria is not used in client-facing or confidential work, regardless of its capability.

7. Ethical Considerations

7.1 Issues most relevant to our field

We operate in cybersecurity - a field where bad advice can cause material harm. The ethical issues we consider most significant in our AI use are:

  • Accuracy: AI can produce plausible but incorrect technical content. Errors in security guidance can create false confidence or missed risk.
  • Bias: AI models may reflect biases in their training data. We do not rely on AI for threat assessments that could stereotype organisations by sector, size, or geography.
  • Over-reliance: the risk that efficiency gains from AI erode the depth of analysis clients are entitled to expect from a senior consultant.
  • Provenance: the risk that clients do not know AI was used, which affects their ability to assess the basis for our advice.

7.2 Decision-making criteria for ethical dilemmas

When we face a situation where AI use creates ethical uncertainty, we apply the following questions in order:

  1. Would the client reasonably expect to know AI was used in producing this? If yes, disclose.
  2. Could an error in this content cause harm - financial, reputational, security, or regulatory? If yes, manual verification is mandatory.
  3. Am I relying on AI because it is efficient, or because I lack the expertise to do this myself? The latter is not an acceptable use case.
  4. Would I be comfortable if this AI interaction were visible to my client? If not, reconsider the approach.

7.3 Perspectives we consider

We consider the following stakeholder perspectives when making decisions about AI use in our practice:

  • Clients: entitled to accurate, expert advice and transparency about how it was produced
  • Client employees: may be affected by security recommendations that AI assisted in shaping
  • Regulators: may require assurance that advice meets professional standards, not just adequate outputs
  • The profession: widespread low-quality AI use in consulting risks eroding trust in the field

8. Disclosure and Transparency

8.1 Our default position

We disclose AI involvement in our work when asked, and proactively in contexts where a reasonable client would consider it material. We do not misrepresent AI-assisted work as entirely hand-crafted.

8.2 Disclosure standards by context

  • Client reports and formal deliverables - Footer or methodology note: 'This document was prepared with AI drafting assistance and reviewed by a qualified consultant.'
  • Proposals and capability statements - On request; available in our standard engagement terms
  • Published articles and LinkedIn content - No mandatory disclosure for AI-assisted drafts that are substantively authored and edited by us, consistent with standard editorial practice
  • Engagement scope and MSA - AI use policy referenced and available on request
  • Verbal advice in meetings - No disclosure required; AI is not involved in real-time advisory conversations

8.3 When more detailed disclosure is appropriate

We provide more detailed disclosure - including tool used, scope of use, and review process - when:

  • A client expressly asks how a deliverable was produced
  • The engagement involves regulatory submissions, legal proceedings, or audit evidence
  • The client's own AI policy requires supplier disclosure
  • We assess that the nature of the content creates elevated risk if the production method is unclear

Attribution template - standard

"Sections of this document were drafted with the assistance of Claude (Anthropic). All content has been reviewed, edited, and is endorsed by Coastal Cyber. The advice and recommendations contained herein reflect our professional judgement."

Attribution template - detailed

"This document was produced using Claude (Anthropic) as a drafting tool. [Specific sections] were AI-assisted. All technical content, risk ratings, and recommendations were independently verified and approved by [name], [qualification]. No client confidential data was entered into AI systems during production."

9. Policy Review and Updates

This policy is reviewed:

  • Annually as a minimum
  • When we adopt a new AI tool or significantly change how we use an existing one
  • When relevant legislation, regulation, or professional standards change
  • When a material issue arises that this policy does not adequately address

Policy owner: DJ - Principal Consultant, Coastal Cyber
Current version: 1.0
Effective date: March 2026
Next scheduled review: March 2027
Published location: coastalcyber.com.au / available on request

10. Questions and Contact

Questions about this policy or our AI practices can be directed to:

Coastal Cyber Pty Ltd
hello@coastalcyber.com.au
coastalcyber.com.au
Sunshine Coast, Queensland, Australia