
Technical Requirements

To deploy an application on Game Warden, your solution must satisfy the following architecture and security specifications before you engage the Game Warden onboarding team.


Architecture

Containerization (OCI-compliant): The application must run in containers that conform to the Open Container Initiative (OCI) specification.

Kubernetes compatibility: The application must be deployable on Kubernetes, using standard Kubernetes primitives (Deployments, StatefulSets, Services, ConfigMaps, etc.), and must not rely on host-level access or non-Kubernetes runtimes.

Database seeding: Provide automated seed services or SQL/DDL scripts for the Game Warden team to execute. At IL4 you will not have direct write access to production databases.

CPU architecture: Workloads must target AMD64/x86_64 or ARM64/AArch64.
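
The Kubernetes requirement above can be illustrated with a minimal manifest sketch. All names, labels, image references, and ports here are hypothetical, not a Game Warden-specific template; the point is that only standard primitives are used (no hostPath volumes, privileged containers, or host networking):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0.0   # OCI-compliant Linux image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080
```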
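
Because the Game Warden team executes seed scripts on your behalf, they should be self-contained and idempotent (safe to re-run without duplicating rows). A minimal sketch, using SQLite as a stand-in database and a hypothetical `roles` table:

```python
import sqlite3

# Hypothetical schema: table and column names are illustrative only.
SEED_DDL = """
CREATE TABLE IF NOT EXISTS roles (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
"""

SEED_ROWS = [(1, "admin"), (2, "operator"), (3, "viewer")]

def seed(conn: sqlite3.Connection) -> None:
    """Create the schema and insert seed data; safe to run repeatedly."""
    conn.executescript(SEED_DDL)
    # INSERT OR IGNORE keeps the script idempotent via the UNIQUE constraint.
    conn.executemany(
        "INSERT OR IGNORE INTO roles (id, name) VALUES (?, ?)", SEED_ROWS
    )
    conn.commit()

if __name__ == "__main__":
    connection = sqlite3.connect(":memory:")
    seed(connection)
    count = connection.execute("SELECT COUNT(*) FROM roles").fetchone()[0]
    print(f"seeded {count} roles")
```

The same idempotency principle applies to plain SQL/DDL scripts (e.g., `CREATE TABLE IF NOT EXISTS` plus conflict-tolerant inserts in your database's dialect).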

Security

Meeting ATO vulnerability baseline: Game Warden performs continuous security scans. All findings must be remediated in accordance with the Acceptance Baseline Criteria, and components must be patched regularly to maintain ATO compliance.

Continuous CVE remediation: New CVEs discovered post-deployment must be resolved promptly by the application team.

DoW-approved authentication: Applications must integrate with a DoW-approved identity provider: Game Warden SSO or Platform One SSO.

Credentialed access (IL4+): Personnel accessing IL4+ environments require a valid government access card credential obtained through standard DoW vetting.

Data classification limits: Permitted data classifications are CUI, PII, IL2, IL4, IL5, and ITAR. Contact Game Warden before processing IL6, Special Access Program (SAP), or Sensitive Compartmented Information (SCI) data.

AWS GPU support by environment

For a list of Amazon EC2 instance types available in AWS GovCloud (US-East), see the AWS documentation.

Instance Name | GPU Supported
g3.4xlarge | 1 NVIDIA Tesla M60 GPU, with 2048 parallel processing cores and 8 GiB of video memory
g3.8xlarge | 2 NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory
g3.16xlarge | 4 NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory
g4dn.xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.2xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.4xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.8xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.12xlarge | 4 NVIDIA T4 Tensor Core GPUs
g4dn.16xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.metal | 8 NVIDIA T4 Tensor Core GPUs
g5.xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.2xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.4xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.8xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.16xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.12xlarge | 4 NVIDIA A10G Tensor Core GPUs
g5.24xlarge | 4 NVIDIA A10G Tensor Core GPUs
g5.48xlarge | 8 NVIDIA A10G Tensor Core GPUs
p3.2xlarge | 1 NVIDIA Tesla V100 GPU, pairing 5,120 CUDA Cores and 640 Tensor Cores
p3.8xlarge | 4 NVIDIA Tesla V100 GPUs, each pairing 5,120 CUDA Cores and 640 Tensor Cores
p3.16xlarge | 8 NVIDIA Tesla V100 GPUs, each pairing 5,120 CUDA Cores and 640 Tensor Cores
p3dn.24xlarge | 8 NVIDIA Tesla V100 GPUs
p5.48xlarge | 8 NVIDIA H100 Tensor Core GPUs
p5en.48xlarge | 8 NVIDIA H100 Tensor Core GPUs

Instance Name | GPU Supported
g3.4xlarge | 1 NVIDIA Tesla M60 GPU, with 2048 parallel processing cores and 8 GiB of video memory
g3.8xlarge | 2 NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory
g3.16xlarge | 4 NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory
g4dn.xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.2xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.4xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.8xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.12xlarge | 4 NVIDIA T4 Tensor Core GPUs
g4dn.16xlarge | 1 NVIDIA T4 Tensor Core GPU
g4dn.metal | 8 NVIDIA T4 Tensor Core GPUs
g5.xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.2xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.4xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.8xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.16xlarge | 1 NVIDIA A10G Tensor Core GPU
g5.12xlarge | 4 NVIDIA A10G Tensor Core GPUs
g5.24xlarge | 4 NVIDIA A10G Tensor Core GPUs
g5.48xlarge | 8 NVIDIA A10G Tensor Core GPUs
g6.xlarge | 1 NVIDIA L4 Tensor Core GPU
g6.2xlarge | 1 NVIDIA L4 Tensor Core GPU
g6.4xlarge | 1 NVIDIA L4 Tensor Core GPU
g6.8xlarge | 1 NVIDIA L4 Tensor Core GPU
g6.16xlarge | 1 NVIDIA L4 Tensor Core GPU
g6.12xlarge | 4 NVIDIA L4 Tensor Core GPUs
g6.24xlarge | 4 NVIDIA L4 Tensor Core GPUs
g6.48xlarge | 8 NVIDIA L4 Tensor Core GPUs
p3.2xlarge | 1 NVIDIA Tesla V100 GPU, pairing 5,120 CUDA Cores and 640 Tensor Cores
p3.8xlarge | 4 NVIDIA Tesla V100 GPUs, each pairing 5,120 CUDA Cores and 640 Tensor Cores
p3.16xlarge | 8 NVIDIA Tesla V100 GPUs, each pairing 5,120 CUDA Cores and 640 Tensor Cores
p3dn.24xlarge | 8 NVIDIA Tesla V100 GPUs
p4d.24xlarge | 8 NVIDIA A100 Tensor Core GPUs
p5.48xlarge | 8 NVIDIA H100 Tensor Core GPUs


GCP GPU support

Below are the GPUs supported in the us-east4 region (Northern Virginia) for Assured Workloads:

GPU Model | Machine Series | Typical Use Case
NVIDIA H100 (80GB) | A3 | Large-scale AI training and LLM fine-tuning.
NVIDIA A100 (40GB/80GB) | A2 | High-performance deep learning and data science.
NVIDIA L4 | G2 | AI inference, video processing, and smaller training tasks.
NVIDIA T4 | N1 | Cost-effective inference and graphics acceleration.
NVIDIA RTX PRO 6000* | G4 | High-end workstations and Blackwell architecture tasks.

Azure GPU support

Below are the GPUs supported in Azure Government Cloud (US Gov Virginia):

Series | GPU Model | Primary Use Case
ND H100 v5 | NVIDIA H100 | High-end AI: LLM training and massive generative AI.
ND A100 v4 | NVIDIA A100 (80GB) | Deep learning: large-scale training and high-memory HPC.
NCads H100 v5 | NVIDIA H100 | Inference/training: focused on PCIe H100 performance.
NCas T4 v3 | NVIDIA T4 | Inference: lightweight AI, video encoding, and data processing.
NVads A10 v5 | NVIDIA A10 | Visualization: graphics-heavy apps, CAD, and VDI.
NCv3 / NCv2 | NVIDIA V100 / P100 | Legacy compute: older HPC workloads (often being phased out).
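
On Kubernetes, GPUs from instances like those above are typically consumed through an extended resource request. The sketch below assumes the cluster exposes NVIDIA GPUs via the standard NVIDIA device plugin resource name (`nvidia.com/gpu`); confirm with the Game Warden team how GPUs are surfaced in your specific environment. The pod name and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example                 # hypothetical name
spec:
  containers:
    - name: trainer
      image: registry.example.com/trainer:1.0.0   # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1         # request one GPU; requires the NVIDIA device plugin
```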

Programming language

Game Warden does not impose restrictions on your choice of programming language or framework. You can deploy your application in any language, as long as it is packaged as a Linux-based container. Note that Windows-based containers are not supported.
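A minimal Linux-based container sketch follows; Python is used only as an example, and the file names and entrypoint are illustrative:

```dockerfile
# Any Linux base image works; Windows base images are not supported.
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
USER 1001                          # run as a non-root user where possible
ENTRYPOINT ["python", "app.py"]
```

To cover the AMD64/ARM64 CPU architecture requirement, a multi-architecture image can be produced with, for example, `docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/example-app:1.0.0 --push .` (registry and tag are hypothetical).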


Next steps

Confirm your architecture and security posture meet the requirements above, then contact the Game Warden team. We’ll help ensure alignment and support your application launch. Reach out to the Growth team for details.