01 Purpose
This Web Content Filtering Policy establishes the requirements and controls governing access to internet-based content from Elexon Group's corporate ICT environment. It supports compliance with the Australian Signals Directorate (ASD) Essential Eight at Maturity Level 2 (ML2) and the Defence Industry Security Program (DISP) Entry Level membership requirements under DSPF Principle 16, Control 16.1.
This policy is intended to:
- Protect the organisation's systems, information, and users from web-based threats including malware, phishing, and malicious content;
- Enforce browser and web access controls consistent with ASD and vendor hardening guidance (ISM-1412, ISM-1485, ISM-1486);
- Establish acceptable and unacceptable categories of web content for business use;
- Define monitoring, enforcement, and exception management processes; and
- Demonstrate documented and auditable security controls to the Defence Industry Security Branch (DISB) during Entry Level Assessments, Ongoing Suitability Assessments, and Annual Security Reports.
02 Scope
This policy applies to:
- All employees, contractors, subcontractors, and third parties accessing Elexon Group's ICT systems;
- All corporate-owned or corporate-managed devices (workstations, laptops, mobile devices) used to conduct or support Defence-related activities;
- All internet and web browsing activity conducted via Elexon Group's network infrastructure or VPN connections; and
- All web-facing systems within scope of DISP Entry Level membership, as defined in the organisation's DISP membership certificate and associated System Security Plan.
This policy does not apply to personal devices used solely for personal purposes and not connected to the corporate network, unless such devices are used to access corporate resources or Defence-related information.
03 Policy Authority and Ownership
| Role | Responsibility |
| --- | --- |
| Chief Security Officer (CSO) | Policy owner; accountable for approval, review, and compliance reporting to DISP. |
| Security Officer (SO) | Day-to-day management, exception handling, and evidence maintenance. |
| IT Manager / MSSP | Technical implementation, monitoring, and reporting of web content filtering controls. |
| All Staff | Compliance with this policy; reporting of suspected policy violations or security incidents. |
04 Regulatory and Framework Alignment
| Framework / Control | Reference | Relevance to This Policy |
| --- | --- | --- |
| ASD Essential Eight ML2 | User Application Hardening | Browser hardening; blocking Java, ads, and malicious content |
| ASD ISM | ISM-1485, ISM-1486, ISM-1412 | Specific controls for web browser configuration and content filtering |
| DISP Entry Level | DSPF Principle 16, Control 16.1 | Mandatory cyber security requirements for DISP membership |
| ACSC Guidance | Strategies to Mitigate Cyber Security Incidents | Baseline mitigation strategies informing browser and web controls |
| Privacy Act 1988 (Cth) | Australian Privacy Principles | Governs monitoring and logging of user web activity |
05 Web Content Filtering Controls
The following controls must be implemented and maintained on all in-scope systems. These controls reflect the minimum requirements for ASD Essential Eight ML2 (User Application Hardening) and must be documented with supporting technical evidence for DISP assurance activities.
5.1 Browser Hardening Requirements
In accordance with ISM-1412, all web browsers deployed on corporate workstations must be hardened using ASD and vendor hardening guidance. Where ASD and vendor guidance conflict, the most restrictive requirement takes precedence.
Mandatory browser hardening controls include:
- All supported browsers (e.g., Microsoft Edge, Google Chrome) must be deployed in a managed, organisation-controlled configuration enforced via Group Policy or Mobile Device Management (MDM/Intune);
- Users must not be able to modify security-relevant browser settings, including disabling security warnings, adding certificate exceptions, or enabling blocked content types;
- Automatic browser updates must be enabled or managed to ensure security patches are applied within the patching timelines specified in the organisation's Patch Applications Policy;
- Legacy and unsupported browsers (including Internet Explorer 11) must be disabled or removed from all endpoints (ISM-1654);
- Browser extensions and plugins must be restricted to an organisation-approved list; unapproved extensions must be blocked via policy; and
- HTTPS connections must be enforced where available; use of unencrypted HTTP to sensitive or business-critical services must be blocked, or must present a clear warning to the user.
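The hardening baseline above can be verified programmatically. The sketch below, offered as an illustration only, audits an exported browser policy (for example a GPO or Intune configuration rendered as JSON) against a minimal hardened baseline. The policy names mirror common Chromium-based (Edge/Chrome) administrative template settings, but exact names and values must be confirmed against the vendor's policy reference and ASD hardening guidance before use.

```python
# Illustrative sketch: audit an effective browser policy against a minimal
# hardened baseline. Policy names follow Chromium-style administrative
# templates; verify against vendor and ASD guidance (assumed values).

REQUIRED_BASELINE = {
    "ComponentUpdatesEnabled": True,      # automatic updates stay enabled
    "SSLErrorOverrideAllowed": False,     # users cannot bypass TLS warnings
    "DefaultPluginsSetting": 2,           # block plugins (incl. legacy Java)
    "ExtensionInstallBlocklist": ["*"],   # block all extensions by default
}

def audit_browser_policy(effective: dict) -> list:
    """Return a list of deviations from the hardened baseline."""
    findings = []
    for key, expected in REQUIRED_BASELINE.items():
        actual = effective.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings
```

A compliant export yields an empty findings list; any deviation is a candidate evidence item for DISP assurance reporting.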
5.2 Blocking of Java from the Internet
In accordance with ISM-1486, web browsers must not process Java content originating from the internet. This control applies to all browser-based Java plugins and applets. Specifically:
- Java plugins must be disabled or blocked in all managed browser configurations;
- Java Web Start (.jnlp) files from internet sources must be blocked at the web proxy or endpoint; and
- Where Java is required for legitimate business applications, it must be accessed via explicitly whitelisted, internal, or trusted sources only, with IT Manager approval and documented exception.
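The ISM-1486 rule set above can be reduced to a simple URL-filter decision of the kind a web proxy or endpoint agent applies. The sketch below is illustrative only; the internal allowlist entry is a hypothetical host, and in a real deployment the allowlist would be driven from the approved exception register.

```python
# Illustrative sketch: block Java Web Start (.jnlp) content from the internet
# unless served from an explicitly approved internal host (ISM-1486).
from urllib.parse import urlparse

INTERNAL_JAVA_ALLOWLIST = {"apps.corp.example"}  # hypothetical approved host

def java_content_allowed(url: str) -> bool:
    """Return False for .jnlp content unless the host is on the allowlist."""
    parsed = urlparse(url)
    if parsed.path.lower().endswith(".jnlp"):
        return parsed.hostname in INTERNAL_JAVA_ALLOWLIST
    return True  # non-Java content is out of scope for this rule
```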
5.3 Blocking of Web Advertisements
In accordance with ISM-1485, web browsers must not process web advertisements from the internet. Advertising content represents a significant vector for malvertising attacks and drive-by downloads. Required controls include:
- An organisation-approved ad blocking solution must be deployed at the network (DNS or proxy) layer and/or browser layer;
- Ad-blocking controls must apply to all in-scope endpoints and must not be circumventable by end users;
- Third-party advertising networks and known malvertising domains must be blocked; and
- Blocked advertising requests must be logged for monitoring and audit purposes.
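At the DNS layer, the ad-blocking control amounts to a lookup against a maintained blocklist, including subdomain matching so that hosts under a blocked advertising domain are also denied. The sketch below is illustrative; the domain entries are placeholders, as production deployments consume curated threat and advertising feeds.

```python
# Illustrative sketch: DNS-layer ad blocking with parent-domain matching.
AD_BLOCKLIST = {"ads.example", "tracker.example"}  # placeholder entries

def dns_blocked(qname: str) -> bool:
    """True if the queried name or any parent domain is on the blocklist."""
    labels = qname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in AD_BLOCKLIST for i in range(len(labels)))
```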
5.4 Web Content Category Filtering
A web content filtering solution (proxy, DNS filtering, or Secure Web Gateway) must be deployed to enforce category-based access controls. The following categories must be blocked by default for all users:
| Blocked Category | Rationale |
| --- | --- |
| Malware / Phishing / Command & Control | Direct threat to organisational security and Defence-related information |
| Hacking / Exploit Tools | Prohibited activity; significant security risk |
| Anonymisers / Proxy Bypass Tools | Circumvention of security controls; policy violation |
| Illegal / Criminal Activity | Legal obligation and DISP security requirement |
| Adult / Explicit Content | Inappropriate for workplace; potential legal liability |
| Gambling | Non-business use; potential legal liability |
| Peer-to-Peer / Torrenting | Data exfiltration risk; intellectual property concerns |
| Newly Registered / Uncategorised Domains | High-risk category frequently associated with malicious activity |
| Unauthorised Cloud Storage / File Sharing | Data exfiltration and information handling risk |
| Personal Email Services (where not approved) | Potential for data exfiltration and bypassing email controls |
The following categories are permitted by default, subject to business justification and this policy's acceptable use requirements:
- Business, finance, and professional services;
- Government and defence industry resources;
- News and media (reputable sources);
- Cloud-based productivity and collaboration tools approved by the organisation; and
- Software development and technical reference sites (for relevant roles).
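The category rules above can be expressed as a default-deny decision function. This is a sketch only: the category names are illustrative (real gateways use their vendor's taxonomy), but it captures the fail-closed behaviour the table requires, where uncategorised or unresolvable content is blocked by default.

```python
# Illustrative sketch: category-based access decision with fail-closed default.
from typing import Optional

BLOCKED_CATEGORIES = {
    "malware", "phishing", "command-and-control", "hacking",
    "anonymiser", "illegal", "adult", "gambling", "p2p",
    "newly-registered", "uncategorised", "unauthorised-file-sharing",
    "personal-email",
}

def access_decision(category: Optional[str]) -> str:
    """Return 'block' or 'allow' for a resolved content category."""
    if category is None:
        category = "uncategorised"  # unknown content fails closed
    return "block" if category.lower() in BLOCKED_CATEGORIES else "allow"
```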
5.5 HTTPS Inspection
Where technically feasible and proportionate to the organisation's risk profile, SSL/TLS inspection (HTTPS decryption) should be implemented on the web proxy or Secure Web Gateway to:
- Enable content inspection of encrypted web traffic for malware and data exfiltration;
- Enforce filtering controls against HTTPS-delivered content; and
- Log and audit encrypted traffic for security monitoring purposes.
Where HTTPS inspection is implemented, users must be notified via an acceptable use notice or privacy notice consistent with applicable obligations under the Privacy Act 1988 (Cth). Inspection must not apply to personal banking, health services, or other sensitive categories, and appropriate exclusions must be documented and approved by the CSO.
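The privacy exclusions described above translate to a bypass decision applied before decryption. The sketch below is illustrative; the category names are placeholders, and the authoritative exclusion list is the CSO-approved register, not this code.

```python
# Illustrative sketch: decide whether a traffic category is subject to
# TLS decryption and inspection, honouring CSO-approved privacy exclusions.
INSPECTION_BYPASS_CATEGORIES = {"banking", "health", "government-services"}

def inspect_tls(category: str) -> bool:
    """True if encrypted traffic in this category should be decrypted."""
    return category.lower() not in INSPECTION_BYPASS_CATEGORIES
```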
5.6 DNS-Layer Filtering
DNS-layer filtering must be implemented as a defence-in-depth control. All DNS queries from corporate endpoints must be routed through an organisation-controlled or approved DNS filtering service. This must:
- Block resolution of known malicious domains (malware C2, phishing, botnet infrastructure);
- Block resolution of domains matching blocked web content categories; and
- Log DNS query activity for security monitoring and incident response purposes.
Use of public or alternative DNS resolvers (e.g., 8.8.8.8) must be restricted on corporate endpoints, unless specifically approved and documented.
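Enforcement of the approved-resolver rule can be monitored from firewall flow logs. The sketch below is illustrative: the flow tuples and resolver address are hypothetical, and a real implementation would parse the organisation's firewall logs for outbound traffic on ports 53 and 853 (DNS over TLS).

```python
# Illustrative sketch: flag endpoints sending DNS traffic to resolvers other
# than the approved filtering service (resolver address is hypothetical).
APPROVED_RESOLVERS = {"10.0.0.53"}  # hypothetical corporate DNS service

def rogue_dns_clients(flows) -> set:
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples.
    Return source IPs querying unapproved resolvers."""
    return {
        src for src, dst, port in flows
        if port in (53, 853) and dst not in APPROVED_RESOLVERS
    }
```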
06 Acceptable Use of Internet Access
Internet access is provided for business purposes. Limited, reasonable personal use is permitted during lunch breaks and outside business hours, provided it does not:
- Compromise the security or performance of organisational systems;
- Involve access to blocked or inappropriate content categories;
- Result in the downloading, uploading, or sharing of Defence-related, sensitive, or proprietary information to personal or unauthorised cloud services; or
- Violate any applicable law or the organisation's Code of Conduct.
Users must not attempt to circumvent web content filtering controls, including through the use of VPNs, proxy services, Tor, or other anonymisation tools not approved by the organisation.
07 Monitoring, Logging, and Audit
To satisfy ASD Essential Eight ML2 requirements and support DISP assurance activities, the following monitoring and logging controls must be implemented:
- Web proxy, DNS filtering, and firewall logs must be retained for a minimum of 90 days (or as required by the organisation's Data Retention Policy or contractual obligations, whichever is greater);
- Logs must be protected against unauthorised modification or deletion;
- Blocked access attempts must be logged, including the user, device, timestamp, destination URL/domain, and category;
- Log data must be reviewed on at least a monthly basis by the SO or delegated security staff to identify anomalies, policy violations, or potential security incidents;
- Significant filtering events or patterns indicative of a security incident must be escalated in accordance with the organisation's Incident Response Plan; and
- Log review records must be maintained as evidence for DISP Annual Security Reports and audits.
Users are advised that web activity conducted on corporate systems and networks is monitored and logged. This monitoring is conducted for security, compliance, and operational purposes and is consistent with the organisation's privacy notice and applicable law.
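The monthly log review can start from a simple aggregation over blocked-access records. The sketch below assumes records carry the fields required by this section (user, category, and so on); the underlying log format is deployment-specific, so this is a shape for the review step rather than a parser for any particular product.

```python
# Illustrative sketch: summarise blocked-access records per user and per
# category to surface anomalies during the monthly SO review.
from collections import Counter

def review_summary(records) -> dict:
    """records: iterable of dicts with at least 'user' and 'category' keys."""
    records = list(records)
    return {
        "by_user": Counter(r["user"] for r in records),
        "by_category": Counter(r["category"] for r in records),
    }
```

Unusual spikes per user or per category in the summary are candidates for escalation under the Incident Response Plan, and the summary itself forms part of the log review evidence.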
08 Exception Management
Where a business requirement exists to access content or websites that would otherwise be blocked by this policy, an exception may be requested through the following process:
| Step | Action | Detail |
| --- | --- | --- |
| 1 | Request Submission | User or manager submits a written exception request to the SO, detailing the business justification, specific URL/domain/category, required duration, and risk acknowledgement. |
| 2 | Risk Assessment | SO assesses the security risk of granting the exception, including threat intelligence checks on the requested domain. |
| 3 | Approval | CSO (or delegate) approves or rejects the exception. Exceptions involving Defence-related activities require CSO approval. |
| 4 | Implementation | IT Manager implements approved exceptions in the filtering system, scoped to the minimum necessary users and duration. |
| 5 | Recording | All exceptions must be recorded in the Web Content Filtering Exception Register, including approval details, duration, and periodic review dates. |
| 6 | Periodic Review | Exceptions are reviewed quarterly by the SO. Expired or unjustified exceptions must be revoked promptly. |
Emergency exceptions required to support an active security incident response may be implemented immediately by the IT Manager and must be documented within 24 hours of implementation.
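The quarterly review step amounts to filtering the exception register for lapsed entries. The sketch below is illustrative; the field names are placeholders for whatever schema the register actually uses.

```python
# Illustrative sketch: identify register entries whose approved duration has
# lapsed and must be revoked at the quarterly review (field names assumed).
from datetime import date

def expired_exceptions(register, today: date) -> list:
    """register: iterable of dicts with an 'expires' date field."""
    return [e for e in register if e["expires"] < today]
```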
09 Policy Violations and Incident Reporting
Violations of this policy — including attempts to circumvent web filtering controls — may result in disciplinary action in accordance with the organisation's Human Resources policies and Code of Conduct. Serious violations may be referred to relevant authorities.
Any user who becomes aware of a suspected security incident attributable to web-based content (including malware downloads, phishing attempts, or data exfiltration) must report it immediately to the SO in accordance with the organisation's Incident Response Plan. Incidents involving Defence-related information must also be reported to the ASD and to DISP as required under DSPF obligations.
10 Training and Awareness
All staff must receive security awareness training that covers:
- The purpose and operation of web content filtering;
- Acceptable and unacceptable internet use;
- Identification of phishing and malicious websites;
- Reporting obligations for suspected incidents; and
- Consequences of policy violations.
Training must be conducted upon induction and at least annually thereafter. Completion records must be maintained as evidence for DISP assurance activities.
11 Policy Review and Maintenance
This policy must be reviewed at least annually by the CSO and SO, and additionally following:
- A significant change to the organisation's ICT environment or DISP membership level;
- A security incident involving web-based threats;
- An update to the ASD Essential Eight Maturity Model or ASD ISM controls that affects the requirements of this policy; or
- A finding from a DISP Entry Level Assessment, Ongoing Suitability Assessment, Deep Dive Audit, or Annual Security Report that identifies gaps in web content filtering controls.
Review outcomes must be documented, and any required changes must be actioned within a timeframe agreed with the CSO. The policy version history must be maintained.