Evidence Guide: Data Protection Assessment
Depending on factors like what types of Platform Data you collect, Meta may ask about your security practices when you complete the Data Protection Assessment (DPA). Certain questions will also require you to upload supporting evidence.
Meta will request procedure or policy evidence, implementation evidence, or both. Evidence-related issues are some of the most common challenges developers face when completing the DPA. Those issues often arise because developers have submitted incomplete or insufficient evidence.
As you prepare for the DPA, use this guide to help you submit evidence that will meet Meta's requirements. Meta may ask about the following security practices:
Keep in mind: Any evidence that does not currently exist will require time and resources to create or compile. Review these requirements early so you can prepare and submit acceptable evidence before your DPA is due.
Protecting Platform Data with encryption at rest
Meta's requirement:
You are required to protect the Platform Data you store in a cloud, server, or data center environment with encryption at rest, or with an acceptable alternative.
How do I meet this requirement?
To meet Meta's requirement for encrypting data at rest, you must:
- Enable either application-level (e.g., software encrypts/decrypts specific columns in a database) or full-disk encryption
- Apply encryption at rest comprehensively in the cloud/server environment, including both primary storage and backups
Meta recommends that you use industry-standard encryption (e.g., AES, BitLocker, Blowfish, TDES, RSA), but does not require any particular algorithm or key length.
What should I do to prepare evidence?
This question requires both policy/procedure and implementation evidence. Your evidence must clearly demonstrate that you require encryption at rest for all Meta Platform Data. It cannot be optional.
Examples of acceptable evidence include:
- Screenshot of a sample AWS or other cloud provider service configuration that demonstrates encryption at rest is enabled
- Screenshot of database table showing the Platform Data fields are encrypted
- Unix command output showing that you have enabled disk encryption (see the sketch after this list)
- Statement that you use a platform service that is entirely encrypted at rest, such as Google Cloud
- Statement that you don't store any data in your cloud or server environment
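For the Unix command output mentioned above, a minimal sketch, assuming a Linux server that uses LUKS/dm-crypt (device and volume names are placeholders):

    # List block devices; a TYPE of "crypt" indicates a dm-crypt/LUKS encrypted volume
    lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT

    # Show the encryption status of a specific mapped volume (volume name is a placeholder)
    sudo cryptsetup status encrypted_data

Equivalent output from another platform is fine; the point is to show that the volumes holding Platform Data are encrypted.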
Detailed examples for common AWS services:
AWS S3 buckets may be configured to apply encryption at rest to all objects created within the bucket. Use these commands to list buckets and fetch the configuration for default bucket encryption.
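A representative AWS CLI sketch (the bucket name below is a placeholder):

    # List all S3 buckets in the account
    aws s3api list-buckets

    # Fetch the default encryption configuration for a bucket that stores Platform Data
    aws s3api get-bucket-encryption --bucket my-platform-data-bucket

The second command returns the default server-side encryption rule (e.g., AES256 or aws:kms) applied to new objects in the bucket.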
AWS RDS - encryption at rest is configurable in AWS RDS, so developers must make sure that the configuration option is set to apply this protection.
For a representative RDS instance that contains Platform Data, use the AWS CLI tool to fetch its StorageEncrypted configuration.
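For example, a sketch using the AWS CLI (the instance identifier is a placeholder; the returned value should be true):

    # Fetch the StorageEncrypted flag for an RDS instance that stores Platform Data
    aws rds describe-db-instances \
      --db-instance-identifier my-platform-data-db \
      --query 'DBInstances[*].StorageEncrypted'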
Examples of unacceptable evidence include:
- A statement that you use a service for which encryption at rest is optional
- A statement that you only encrypt backups
What if I don't encrypt data at rest?
If you do not encrypt Platform Data at rest, you must prove that you are still protecting Platform Data in a way that meets Meta's standards. To do so, provide one of the following:
- ISO 27001:2013 Statement of Applicability: Control A.11 for Physical & Environmental Security and Control A.8.3 for secure media handling
- SOC 2 Type 2 Workbook: Control CC6.4 for physical security and CC6.5 for secure media handling
The ISO or SOC report you provide must cover the organization that operates the physical hosting environment where you store Meta Platform Data in your backend.
Protecting Platform Data on devices and removable storage
Meta's requirement:
If you allow employees to access and store Platform Data on organizational devices (e.g., company-issued laptops), personal devices, or removable storage (e.g., USBs), you are required to take steps to limit the risk of that Platform Data being lost or stolen.
How do I meet this requirement?
There are two acceptable paths to meeting this requirement:
You store Platform Data on organizational devices AND apply controls to protect it.
In this scenario, you must demonstrate that you have administrative controls in place to protect Platform Data on devices (organizational and personal) and removable storage. Your policy documentation must clearly state what employees are prohibited from doing with Platform Data. If you also enforce technical controls, provide evidence of those as well.
You prevent storage of Platform Data on organizational devices, OR Meta Platform Data is never accessible to any member of the organization.
In this scenario, you must demonstrate that you have administrative controls in place to prevent storage of Platform Data on organizational devices or to prevent members of the organization from accessing Meta Platform Data.
What should I do to prepare evidence?
If you allow storage of Platform Data on these devices, you must have EITHER administrative controls (i.e., rules/policies) OR technical controls (e.g., disk encryption) relevant to Platform Data stored on organizational devices (e.g., laptops) and removable media.
Administrative Controls
If you do allow storage of Platform Data on organizational devices and removable media, the policy and procedure evidence you provide must include a document that states BOTH of the following:
- Storage of Platform Data is only allowed when there is a clear and actionable business purpose, AND
- Platform Data is deleted from organizational devices and removable media when the business reason for storing that data no longer exists.
Examples of acceptable evidence include:
- Documented policies outlining what employees are required to do, and prohibited from doing, with managed devices that access Platform Data
- A documented mechanism that facilitates organizational awareness of data handling guidelines for data generally, and Platform Data specifically, such as: a contractual agreement as a condition of employment, email reminders, and/or annual training
The documents you provide must clearly demonstrate how you enforce the controls listed above.
Technical Controls
If you do allow storage of Platform Data on organizational devices and removable media, acceptable policy/procedure evidence of technical controls might demonstrate that you do ONE of the following:
- Block unmanaged devices from connecting to sensitive services (e.g., the production network)
- Enforce full-disk encryption on managed devices (e.g., BitLocker for Windows, FileVault for Mac) where Platform Data is stored
- Block removable storage (e.g., USB drives) from being connected to managed devices
- Use Data Loss Prevention (DLP) technology on managed devices to block improper handling of Platform Data (e.g., sending the data in an email attachment)
Acceptable implementation evidence of technical controls would include a screenshot that demonstrates you have implemented an acceptable control, such as full disk encryption or Data Loss Prevention software, for all devices storing Platform Data.
The evidence you submit must demonstrate that you enforce an acceptable control at the organizational level. A screenshot of the control implemented on a single device will NOT be accepted.
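As an illustration only: the per-device commands below (assuming macOS with FileVault and Windows with BitLocker) show the kind of status that MDM or endpoint-compliance tooling typically verifies; for your submission, export that tooling's organization-wide compliance report rather than screenshots of individual devices.

    # macOS: report whether FileVault full-disk encryption is enabled on the device
    fdesetup status

    # Windows (elevated prompt): report BitLocker status for the system drive
    manage-bde -status C: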
We do not allow storage of Meta Platform Data
If you do not allow storage of Platform Data on these devices, you must have administrative controls relevant to Platform Data on organizational devices (e.g., laptops) and removable media. You must provide BOTH of the following:
- Policy or procedure evidence: A written document that clearly states that storage of Meta Platform Data on organizational devices (laptops, tablets, etc.) and removable media (USB devices, phones, etc.) is forbidden.
Please highlight or circle the clause in your policy that relates to this control. If you have data classification and data handling policies to determine controls on Platform Data storage, please indicate how you have classified Meta Platform Data.
- Implementation Evidence: A written statement that confirms you forbid the storage of Platform Data. That statement should be similar to the following: “All employees within my organization who are subject to the policy forbidding the processing of Platform Data on organizational and personal devices have been informed of that policy and have acknowledged their understanding. All new employees are informed of that policy as part of their onboarding.”
Protecting Platform Data with encryption in transit
Meta's requirement:
You are required to enable TLS 1.2 encryption or greater for all public network connections where Platform Data is transmitted.
This requirement applies to all developers, whether or not you process Platform Data server-side.
How do I meet this requirement?
For all data transmitted over public networks, you must enable TLS 1.2 or greater. You must never allow Platform Data to be transmitted over public networks in unencrypted form (HTTP, FTP) and you must disable older, insecure encryption methods whenever possible.
What should I do to prepare evidence?
This question requires both policy/procedure and implementation evidence. Even if you enable TLS 1.2 or greater, you will not meet this requirement if your evidence indicates that you support older, insecure encryption techniques.
If TLS 1.1 or 1.0 is required for compatibility, that is acceptable; make sure you note this in your evidence if it applies to you.
Examples of acceptable evidence include:
- An SSL Labs test result for each web domain from which you return Platform Data to app clients, showing TLS 1.2 or greater (see the sketch after this list)
- Screenshot of load balancer configuration(s) that demonstrate TLS 1.2 or greater is enabled
- For any third-party server you transmit data to, an explanation of how that third party configures their software client to only allow connections with TLS 1.2 or greater (e.g., a source code configuration for an HTTP client)
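As a rough sketch of how you might capture this evidence yourself (the domain is a placeholder, and the tools named here are examples, not Meta requirements):

    # Enumerate the TLS versions and cipher suites a public endpoint accepts
    nmap --script ssl-enum-ciphers -p 443 api.example.com

    # Confirm that a TLS 1.2 connection succeeds ...
    openssl s_client -connect api.example.com:443 -tls1_2 </dev/null

    # ... and that an older protocol is rejected (this handshake should fail)
    openssl s_client -connect api.example.com:443 -tls1 </dev/null

Any output that clearly shows which protocol versions are allowed and disallowed serves the same purpose.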
Examples of unacceptable evidence include statements such as:
- “We support SSLv2, SSLv3, TLS 1.0, TLS 1.1, and TLS 1.2.” Older, insecure encryption methods like SSLv2 and v3 are forbidden; even if you enable TLS 1.2 or greater, allowing these encryption methods will result in a fail
- “All transmissions are via https.” You must specify the allowed and disallowed TLS versions
What if I don't encrypt data using TLS 1.2 or greater?
You may protect Platform Data in transit using a type of encryption other than TLS; this is acceptable if the approach provides equivalent protection. If that's the case, submit the following details about the encryption you use:
- Whether you use symmetric or asymmetric encryption
- Which encryption algorithm you use (e.g., AES, BitLocker, TDES, RSA)
- The minimum key length you enforce
Testing your app and systems for security vulnerabilities
Meta's requirement:
You are required to test for vulnerabilities and security issues so that you can find them proactively, ideally preventing security incidents before they happen.
How do I meet this requirement?
All developers must have tested the software that processes Platform Data for vulnerabilities by conducting either a penetration test of the app/system, or a vulnerability scan/static analysis of the software. Additionally:
- The test must have been conducted within the past 12 months
- The output from the most recent test must show that there are no unresolved high-severity or critical vulnerabilities
If you are using a cloud hosting provider, regardless of the hosting approach, you must have also tested the cloud configuration for security issues.
What should I do to prepare evidence?
This question requires both policy/procedure and implementation evidence. Your policy evidence must include the information Meta requires: how frequently you test and a description of the tests you run.
Your implementation evidence should show that you have completed a penetration test or static application security test (SAST) tool execution. It must include:
- A description of the scope of the test
- The date that it was completed, which should be within the last 12 months
- A summary or list of vulnerabilities found during the test, including severity categorization (e.g., critical, high, medium, low, informational); Meta expects there to be no unresolved high severity or critical vulnerabilities
If any identified critical or high severity vulnerabilities pose no risk of unauthorized access to Meta Platform Data, make note of this in your evidence.
- If you are using a cloud hosting provider (e.g., AWS, GCP, Azure, etc.):
- You must also submit evidence that you have undertaken a cloud configuration review using a tool such as NCC Scout Suite or AWS Config (see the sketch after this list)
- Ensure your report includes the important information that Meta requires: the date, findings, severity categorization, and confirmation that there are no unresolved high severity or critical vulnerabilities
- If this is not applicable to your organization, include a document in your submission that explains why a cloud configuration review is not applicable
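As an illustration of what running such tests can look like (the tools and paths below are examples, not Meta requirements):

    # Static analysis of the application code base (semgrep is one example of a SAST tool)
    semgrep --config auto ./src

    # Cloud configuration review of an AWS account with NCC Scout Suite
    scout aws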
Examples of unacceptable evidence include statements such as:
- “We don't test, but we don't have any vulnerabilities in our software.” You must provide evidence that you are proactively looking for vulnerabilities at least every 12 months
- “We performed a test, but we have unresolved critical-level vulnerabilities.” To be fully compliant, you must resolve all unremediated critical- and high-severity vulnerabilities or provide a detailed remediation plan outlining how you will resolve them
Remember to remove or redact sensitive information from the evidence, including detailed vulnerability reproduction steps, before submitting.
Protecting the Meta app secret and access tokens
Meta's requirement:
You must protect Meta's app secret and access tokens by:
- Never storing Meta API access tokens on client devices where they are accessible outside of the current app and user, and
- Using a data vault or secrets manager with separate key management service if access tokens and app secrets are stored in a cloud, server, or data center environment.
How do I meet this requirement?
You must protect the Meta API access tokens and Meta's app secret against impersonation and unauthorized access. Specifically, you must do the following for each:
Access Tokens
- If you store Meta access tokens on your devices, the tokens must be written in a way that prevents another user or process from reading them
- If you store Meta access tokens server-side, those tokens must:
a. Be protected using a data vault or secrets manager with a separate key management service; only the application can have access to the decryption key
b. Not be written to log files
App Secret
One of the following must be true:
- You never expose the app secret outside of a secured server environment, e.g., it is never returned by a network call to a browser or mobile app, and the secret is not embedded in code that's distributed to mobile or native/desktop clients
or
- You have configured your app with the Native/Desktop app type so that Meta APIs will no longer trust API calls that include the app secret
What should I do to prepare evidence?
This question requires both policy/procedure and implementation evidence. You must include:
- Documentation about the policy for protecting Meta API access tokens and the app secret
- An attestation that you do not persist any Meta (Facebook) access tokens (i.e., that you only store access tokens in transient memory)
- If the app processes Meta access tokens server-side, include evidence that demonstrates the protections you take; e.g., submit implementation evidence to demonstrate that you: use a data vault, store the values in an encrypted format, and/or configure your app to require app secret proofs
- For example, include evidence of tools in place such as Vault by HashiCorp, GCP Firebase, or Ansible (a minimal sketch follows this list)
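For the app secret proof portion, a minimal sketch, assuming a server-side shell environment (the secrets-manager path, variable names, and Graph API endpoint are illustrative): the proof is an HMAC-SHA256 of the access token, keyed with the app secret, and is sent as the appsecret_proof parameter.

    # Retrieve the app secret from a secrets manager instead of source code (path is a placeholder)
    APP_SECRET=$(vault kv get -field=app_secret secret/meta-app)

    # Compute the appsecret_proof: HMAC-SHA256 of the access token, keyed with the app secret
    proof=$(printf '%s' "$ACCESS_TOKEN" | openssl dgst -sha256 -hmac "$APP_SECRET" | awk '{print $NF}')

    # Call the Graph API with the proof; when the app requires app secret proof, calls without it are rejected
    curl "https://graph.facebook.com/me?access_token=${ACCESS_TOKEN}&appsecret_proof=${proof}"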
Examples of unacceptable evidence include statements such as the following, neither of which provide sufficient protection:
- “We use environment variables.”
- “We hardcode these values into our source files in our code repository.”
Remember to remove or redact sensitive information from the evidence, including detailed vulnerability reproduction steps, before submitting.
What if I don't use a data vault or app-level encryption?
If you do not protect access tokens stored server-side with a data vault or via app-level encryption, you may:
- Protect the app secret using a vault or application encryption where the key is only accessible to the app, and configure the app to require app secret proof for all API calls to Meta
- If that option is not viable (i.e., if you cannot require app secret proof because it would block certain necessary API calls), Meta will consider other controls that you have in place to limit the risk of misuse of stored access tokens; provide as much detail as possible about the alternative protections you enforce
Having an incident response plan, and testing your incident response systems and processes
Meta's requirement:
You must have a plan to respond to security incidents (e.g., a data breach or cyberattack). Plans will vary by organization, but your response plan must detail who is responsible for specific tasks in order to contain the incident, communicate with stakeholders, recover, and learn from what happened.
This requirement applies to all developers, whether or not you process Platform Data server-side.
How do I meet this requirement?
The evidence you submit must demonstrate that your plan:
- Meets Meta's minimum requirements by detailing roles and responsibilities, detection, reaction and recovery, and post-incident review
- Has been tested within the past 12 months
What should I do to prepare evidence?
To meet Meta's requirements, submit the following:
A copy of your incident response plan. It should be one or more documents containing the following information:
- Roles and responsibilities
Include a description of each role, its associated responsibilities, and the redacted names of the individuals assigned to each role
- Detection
Detail your processes for detecting security incidents, including tools used, monitoring frequency, etc.
- Reaction and recovery
Detail your post-detection response (e.g., assembling the team, investigation), remediation, communication (both internal and external), and recovery plan
- Post-incident review
Conduct a meeting (after-action review, post-mortem, etc.) to discuss what was learned, what will change, and to communicate status and results
If you do not have an incident response plan, there are many resources and templates available to help you get started. Check out Counteractive Security's Incident Response Plan template.
Proof that you have tested the plan within the past 12 months. The form of proof can vary, but should include:
- A description of the scenario you used to test the plan (e.g., a tabletop exercise in response to a ransomware attack)
- The date of the most recent test
- The role of each participant
- An explanation for why any person named in the plan's roles and responsibilities did not participate in the test, if applicable
Unacceptable evidence would include statements that indicate:
- You do not proactively test systems and processes
- You have systems and processes in place, but no way to test them
- It's been too long since you've tested your systems and processes (i.e., more than 12 months)
Remember to remove or redact sensitive information from the evidence, including detailed vulnerability reproduction steps, before submitting.
Requiring multi-factor authentication
Meta's requirement:
You must require multi-factor authentication (MFA) for access to every account that can connect to the cloud or server environment and/or the tools and services you use to deploy, maintain, monitor, and operate the systems where you store Platform Data from Meta.
How do I meet this requirement?
MFA is required for the following tools and systems:
- Collaboration/communications tools, e.g., business email or Slack
- Code repository, e.g., GitHub or another tool used to track changes to the app/system's code or configuration
If you process Platform Data server-side:
- Software deployment tools used to deploy code into the cloud/server environment, e.g., Jenkins or another Continuous Integration/Continuous Deployment (CI/CD) tool
- Administrative tools such as a portal or other access used to manage/monitor the cloud or server environment
- Access to servers, including SSH, remote desktop, or similar tools used to remotely log in to the servers running your server-side software
While you can use any MFA implementation, Meta recommends the use of an authenticator app or hardware (e.g., YubiKey) over codes sent by SMS.
What should I do to prepare evidence?
This question requires both policy/procedure and implementation evidence. Your evidence must show that MFA is:
- Enforced on the tools applicable to the environments outlined in the acceptance criteria (i.e., collaboration tools, code repository, cloud/server deployment, cloud/server administrative portal)
- Enabled globally, i.e., for all users; do not just provide an example of a single account/device with MFA enabled. Show a configuration or rule set that demonstrates how you enforce MFA company-wide
Unacceptable evidence would include statements that indicate that MFA is optional. For example:
- A statement indicating that employees can turn on MFA if they want
- A screenshot that indicates MFA is not required by your organization
What if I don't enforce MFA?
MFA is recommended and preferred. But if you do not enforce MFA, one or more of the following alternative approaches may be sufficient to prevent account takeovers. Describe in detail which of the following your organization enforces:
Strong password requirements
Minimum password complexity, prohibiting dictionary words, and prohibiting passwords that are known to have been previously breached
Authentication backoff
Use of a tool that introduces increasingly long waiting periods between failed login attempts from the same source
Automatic lockouts
A mechanism that automatically blocks login to an account after a specific number of failed login attempts (e.g., 10 attempts)
Having a system for maintaining user accounts
Meta's requirement:
You are required to have a system for maintaining user accounts (assigning, revoking, and reviewing access and privileges).
This requirement applies to all developers, whether or not you process Platform Data server-side.
How do I meet this requirement?
Your account maintenance processes and policies must prevent unauthorized use of accounts to gain access to Platform Data. A key component of this requirement is ensuring that access to resources and systems is revoked when it's no longer needed (e.g., an employee leaves the organization). To meet Meta's requirement, you must do all of the following:
- Have a tool or process for managing accounts for the tools, systems, or apps used to:
  - Communicate (e.g., business email or Slack)
  - Ship software (e.g., code repository)
  - Administer and operate any system that processes Platform Data
- Regularly review access grants
- Have a process in place for revoking access when it's no longer required AND no longer being used
- Have a process to promptly revoke access to these tools when an employee leaves the organization
Meta does NOT require you to use any particular tool, or that you use a single consolidated tool to manage accounts across various access types.
What should I do to prepare evidence?
This question requires both policy/procedure and implementation evidence.
- Your policy evidence should explain your account management practices in detail. You must demonstrate that you regularly review accounts and revoke access for employees that leave the company. Ideally, your documentation will include:
- Procedures for creating accounts and granting permissions
- Security requirements (e.g., minimum password complexity, account lockout policy, multi-factor authentication (MFA) policy, account reset procedures)
- Process for revoking access after a period of inactivity, and when an employee leaves the organization
- You must also provide implementation evidence from at least one tool or process* in place to manage accounts that shows (a minimal sketch follows the list below):
- Employees who have departed the organization have had their access to these tools revoked, e.g., upload a reconciliation report comparing user accounts to the authoritative record of current employees
- Access that has not been used for some time is revoked, e.g., upload a report showing that the last access date of each active account holder is within the past 90 days (if the maximum inactivity period is three months)
- Note: You are not required to provide a screenshot from a tool; a spreadsheet is acceptable if that is the process your organization uses to manage accounts
* This evidence should apply to one or more of the following:
- Business email and collaboration tools
- Code repository
- Cloud/server deployment tools
- Cloud/server administrative portal
- Cloud/server remote login (e.g., SSH or remote desktop)
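As a hedged illustration of the reconciliation report mentioned above (the file names and format are assumptions; a spreadsheet produced by HR or IT is equally acceptable):

    # tool_accounts.txt: one account email per line, exported from the tool (e.g., the code repository)
    # employees.txt: authoritative list of current employee emails
    # Lines printed below are accounts with no matching current employee and should be revoked
    comm -23 <(sort tool_accounts.txt) <(sort employees.txt)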
Remember to remove or redact sensitive information from the evidence, including detailed vulnerability reproduction steps, before submitting.
Keeping software up-to-date
Meta's requirement:
You are required to keep software components up-to-date and patched to resolve any known security vulnerabilities. That includes libraries, APIs, virtual machine images, browsers, and more.
This requirement applies to all developers, regardless of your hosting approach (e.g., BaaS, PaaS, IaaS, self-hosted, or hybrid).
How do I meet this requirement?
You must have a defined, repeatable way of identifying available patches that resolve security vulnerabilities, prioritizing them based on risk, and applying patches as an ongoing activity. The output of your patch identification process must show that there are no unresolved critical or high severity vulnerabilities. If any identified critical or high severity vulnerabilities pose no risk of unauthorized access to Meta Platform Data, note this in your evidence. This requirement applies to any of the following software components in place at your organization:
- Libraries, SDKs, packages, app containers, and operating systems used in a cloud or server environment
- Libraries, SDKs, and packages used on client devices, e.g., within mobile apps
- Operating systems and applications used by members of your organization to build and operate the app/system, e.g., operating systems and browsers running on employee laptops
Meta does NOT require you to use any particular tool for these activities. It is most likely that your organization uses different approaches for keeping different software up-to-date.
What should I do to prepare evidence?
This question requires both policy/procedure and implementation evidence. Start by identifying the types of software in your environment that are in scope for the question. Include any libraries, SDKs, packages, virtual machine images, app containers, browsers, operating systems, and other applications used by employees and contributors.
Implementation Evidence
You must provide a report of identified vulnerability patches from whichever tool you use to keep your software up-to-date. The output must show that there are no unresolved critical or high severity vulnerabilities.
If any identified critical or high severity vulnerabilities pose no risk of unauthorized access to Meta Platform Data, make note of this in your evidence.
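As an illustration of the kinds of tools that can produce such a report (the tools and image name below are examples, not Meta requirements):

    # JavaScript dependencies: report known vulnerabilities of high severity or above
    npm audit --audit-level=high

    # Python dependencies: report known vulnerabilities in the installed environment
    pip-audit

    # Container image and OS packages: report unresolved HIGH/CRITICAL findings
    trivy image --severity HIGH,CRITICAL my-app:latest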
Policy or Procedure Evidence
Using a document or screenshot, upload evidence that demonstrates you do each of the following activities:
- Inventory in-scope software components
Provide a list of in-scope libraries, packages, SDKs, containers, app servers, and operating systems (including on employee devices) that need to be patched on an ongoing basis
- Identify available software patches
Show that you have a tool or process in place for identifying security patches for each of the items in your inventory
- Prioritize
Demonstrate the tool or process you use to assign priority to software patches (e.g., Jira tickets, GitHub issues, tracking spreadsheet)
- Patch
Demonstrate that after relevant patches have been identified and prioritized, they are rolled out into the various software components
Logging access to Platform Data and tracing where Platform Data was sent and stored
Meta's requirement:
If you process Platform Data server-side, you must maintain tamper-proof audit logs. You must collect audit logs for both system administrator events and application events.
How do I meet this requirement?
Within the environment that you store Platform Data, you must:
- Maintain audit logs that record key events (e.g., access to Platform Data, use of accounts with elevated permissions, changes to the audit log configuration)
- Consolidate the logs into a central, tamper-proof repository (i.e., the logs should be protected against alteration or deletion)
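Meta does not require evidence here, but as a small sketch of one way to make logs tamper-evident, assuming AWS CloudTrail (the trail name, ARN, and start time are placeholders):

    # Turn on log file integrity validation so altered or deleted log files can be detected
    aws cloudtrail update-trail --name platform-data-trail --enable-log-file-validation

    # Later, verify that delivered log files have not been modified since delivery
    aws cloudtrail validate-logs \
      --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/platform-data-trail \
      --start-time 2024-01-01T00:00:00Z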
What should I do to prepare evidence?
Meta does not require you to submit evidence in support of your answer.
Monitoring transfers of Platform Data and key points where it can leave the system
Meta's requirement:
To ensure Platform Data is only used for its intended purposes, you are required to 1) have an understanding of how Platform Data is expected to be processed, and 2) monitor the actual processing of that data.
How do I meet this requirement?
If you process Platform Data server-side, then within that environment, you should:
- Keep an accurate data flow diagram that shows where Platform Data is stored, processed, and transmitted across networks
- Configure monitoring for transfers of Platform Data outside of the system (e.g., audit logs with an automated monitoring product)
If possible, configure the monitoring system to raise alerts that are reviewed promptly in case Platform Data is transferred unexpectedly.
What should I do to prepare evidence?
Meta does not require you to submit evidence in support of your answer.
Having an automated system for monitoring logs and security events, and for generating alerts
Meta's requirement:
You are required to have a tool in place that ingests log files and other signals to trigger your team to investigate potential security-related events.
How do I meet this requirement?
If you process Platform Data server-side, you should:
- Have a tool that can ingest log files and other events, establish rules that trigger alerts, and have a mechanism to notify your team (e.g., on-call InfoSec investigator)
- Actively ingest relevant signals into the tool (e.g., web access logs, authentication attempts, actions taken by users with elevated privileges)
- Continuously refine the rules to find an ideal balance (i.e., avoiding false positives while paying attention to events that warrant investigation)
What should I do to prepare evidence?
Meta does not require you to submit evidence in support of your answer.