Welcome, brave voyager, to the wild west of modern internal developer platforms! You've decided to tackle a big, gnarly problem: getting out of ‘bookmark hell’.
Backstage is a powerful tool for taming complexity and creating a single, cohesive developer ecosystem. But what exactly is it and what problems does it solve?
Let's dive in.

Check out the official demo here: https://demo.backstage.io/
What It Is and Why You Need It
Think of Backstage as a one-stop-shop for all your developers. It's an Internal Developer Platform (IDP), but in plain English, it's a website built for your engineers. It provides a centralised, single-pane-of-glass view of everything:
- Your codebase: Who owns what? What's the tech stack?
- Your services: Is the API running? How is it performing?
- Your infrastructure: What cluster is this service deployed to?
- Your company's knowledge: A centralised place for documentation, how-to guides, and runbooks.
- Your automated processes: Best of all, a simple UI to spin up a new microservice, create a new cloud resource, or onboard new hires, offloading work from your platform team (or at least speeding it up).
Backstage is an open-source project created at Spotify and now a Cloud Native Computing Foundation (CNCF) project. It's designed to be extensible, meaning you can plug in any tool, service, or workflow your team uses, from GitHub and Jira to AWS and Azure.
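Under the hood, most of this is powered by the software catalog: each service describes itself with a small YAML file checked into its repo. As a hedged illustration (the names and annotation values here are placeholders), it looks something like this:

```yaml
# catalog-info.yaml — lives in the root of the service's repo (example values)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api                 # hypothetical service name
  description: Handles payment processing
  annotations:
    github.com/project-slug: your-org/payments-api  # lets GitHub-backed plugins find the repo
spec:
  type: service
  lifecycle: production
  owner: team-payments               # maps to a Group entity in the catalog
```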
Pros and Cons
Pros:
- Offloads repetitive tasks from platform engineers: Your senior engineers will thank you for giving them back their time. Backstage automates the "grunt work" like creating new repos or setting up CI/CD pipelines.
- Batteries included: Backstage comes with core components like a software catalog, documentation system, UI components (MaterialUI) and software templates.
- Enforces standards and best practices: It's easier to ensure every new service adheres to company-wide standards when the "easy button" to create one is a Backstage template that already has everything configured.
Cons:
- Requires full-stack web development skills: This isn't just a configuration project. Backstage is a React/Node.js application. To create advanced, custom features, you need a team with frontend and backend development experience.
- Large upfront investment: Getting a full-fledged IDP off the ground takes time and dedicated resources. It's a long-term strategic investment, not a quick fix.
- ‘Just another standard’ risk: If not integrated well and holistically, it can become just another tool to maintain that doesn’t really solve its core problem.

Getting Started: Test On Your Local
The quickest way to get a feel for Backstage is to run it locally.
The code is a single repository, but it's split into two main parts: the frontend and the backend. This is a crucial concept to understand, as it influences everything from local development to your final deployment strategy.
- Pull the code into your desired repo:
```bash
# Use Node.js 20 or above!
npx @backstage/create-app
```
- Install and run
```bash
yarn install
yarn dev   # Access on localhost:3000
```
Some key files to understand:
App.tsx: This is your main React component. It's the file where you configure which plugins and layouts your users see. If you want to add a new page or change the main navigation, this is where you do it.
app-config.yaml: This is the main configuration file for your Backstage instance. It defines everything from the application's title to connections for different plugins. Make sure you genuinely understand this one.
app-config.production.yaml: A key concept to grasp is configuration overrides. This file is loaded in a production environment and will override any settings in app-config.yaml. If something is used for both local and prod, put it in app-config.yaml.
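As a small illustration of how the override works (the URLs below are placeholders):

```yaml
# app-config.yaml — shared defaults, used locally and in prod
app:
  title: My Company Backstage
  baseUrl: http://localhost:3000
backend:
  baseUrl: http://localhost:7007

# app-config.production.yaml — only the keys that differ in production
app:
  baseUrl: https://backstage.your-company.com
backend:
  baseUrl: https://backstage.your-company.com
```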
Deploying Backstage with CI/CD
This is where the magic happens. A robust CI/CD pipeline is the backbone of a reliable IDP.
Backstage is fairly lightweight, and if you have a small team you could honestly just deploy it to a Raspberry Pi hanging off a router in the office. In this guide, however, we’ll focus on enterprise-level scalability, availability, and security using an EKS cluster.

Most of this guide is covered in more detail here: https://aws.amazon.com/blogs/opensource/building-developer-portals-with-backstage-and-amazon-eks-blueprints/
We won’t spend too much time on EKS itself, as you’ll often already have one up and running for your team. If you’re just messing around, you can skip these steps and deploy Backstage to an EC2 instance or something easier (skip to ‘Developing the Backstage App’ below if you wish).
Recommended Stack: EKS, Docker, GitHub Actions, and ArgoCD
- Dockerize Everything: Dockerize both the Backstage frontend and backend as separate images. While you could run them as a single image, separating them simplifies scaling and debugging.
```dockerfile
# Frontend
FROM node:20-bookworm AS build
WORKDIR /app
# This will save you a bunch of headaches
ENV NODE_OPTIONS="--no-node-snapshot"
COPY package.json yarn.lock ./
# Copy the workspace sources before installing so Yarn workspaces resolve
COPY packages/ packages/
RUN yarn install --frozen-lockfile
RUN yarn --cwd packages/app/ build

# Serve the static bundle with nginx
FROM nginx:stable
COPY --from=build /app/packages/app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
```dockerfile
# Backend
FROM node:20-bookworm
WORKDIR /app
# This will save you a bunch of headaches
ENV NODE_OPTIONS="--no-node-snapshot"
COPY package.json yarn.lock ./
# Copy the workspace sources before installing so Yarn workspaces resolve
COPY packages/backend/ packages/backend/
COPY plugins/ plugins/
RUN yarn install --frozen-lockfile
RUN yarn --cwd packages/backend/ build
# The backend reads its config at runtime, so ship it with the image
COPY app-config*.yaml ./
EXPOSE 7007
CMD ["node", "packages/backend", "--config", "app-config.yaml"]
```
- Build with GitHub Actions: Set up a GitHub Actions workflow to automatically build and push these Docker images to a container registry like AWS Elastic Container Registry (ECR) on git pushes. Use the Git SHA as the image tag to ensure immutability and easy rollbacks. This assumes you have a GitHub repo storing your Backstage code and an AWS ECR repository ready; here’s a guide for that: https://spacelift.io/blog/terraform-ecr
```yaml
name: Build and Push Docker Images
on:
  push:
    branches:
      - main
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build and push backend image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: backstage-backend
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
      - name: Build and push frontend image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: backstage-frontend
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -f Dockerfile.frontend -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
```
- GitOps with ArgoCD: This is the core of your deployment strategy. ArgoCD is a declarative GitOps tool that ensures the state of your Kubernetes cluster always matches the state defined in a Git repository.
First, you need to set up your EKS cluster with ArgoCD. For a great guide on how to do this, check out this article: https://aws.plainenglish.io/gitops-made-easy-automate-kubernetes-deployments-with-argo-cd-on-aws-eks-02dbd37b4a0b
Once you have your cluster and ArgoCD running, the workflow looks like this:
- Create a new GitOps repo: This repo will contain your Helm chart and application manifests. The "app-of-apps" pattern is a great way to manage multiple applications from a single manifest if you’re following the above guide.
- Image Management: You need a way to tell your Helm chart which image to deploy.
- Manual commit: The easiest way is to add a step to your CI build pipeline that updates the values.yaml file in your GitOps repo with the new image tag and commits the change.
- Argo Image Updater: For a more advanced and automated approach, you can use ArgoCD Image Updater. It continuously polls your container registry for new images and automatically updates the image tag in your GitOps repo, triggering a new deployment. This is the harder but better path for true GitOps automation (see the annotation sketch below). Learn more here: https://argocd-image-updater.readthedocs.io/en/stable/
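Image Updater is driven by annotations on the ArgoCD Application. A minimal sketch, assuming the ECR repository from earlier and Git write-back (exact strategy names vary slightly between Image Updater versions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backstage-backend
  namespace: argocd
  annotations:
    # "backend" is just an alias for the image in the annotations below
    argocd-image-updater.argoproj.io/image-list: backend=1234567890.dkr.ecr.us-east-1.amazonaws.com/backstage-backend
    argocd-image-updater.argoproj.io/backend.update-strategy: latest
    argocd-image-updater.argoproj.io/write-back-method: git
spec:
  # ...the rest of the Application spec, as shown further below
```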
- Add your app to ArgoCD: Use a manifest to tell ArgoCD where to find your application's Helm chart and what cluster to deploy it to.
```yaml
# Helm chart template (app-of-apps), e.g. templates/applications.yaml
# Iterates over the applications map defined in values.yaml
{{- range $name, $app := .Values.applications }}
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backstage-{{ $name }}
  namespace: argocd
spec:
  project: default
  source:
    repoURL: {{ $app.source.repoURL }}
    targetRevision: {{ $app.source.targetRevision }}
    path: {{ $app.source.path }}
    helm:
      values: |
        {{- $app.values | toYaml | nindent 8 }}
  destination:
    server: {{ $.Values.destination.server }}
    namespace: {{ $app.destination.namespace }}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
{{- end }}
```
```yaml
# values.yaml
destination:
  server: 'https://kubernetes.default.svc'
applications:
  backstage-backend:
    source:
      repoURL: 'https://github.com/your-org/backstage-app.git' # The original Backstage application repo
      targetRevision: main
      path: 'k8s/helm-chart/backstage-backend' # Path to a separate backend helm chart
    destination:
      namespace: backstage
    values:
      image:
        repository: 1234567890.dkr.ecr.us-east-1.amazonaws.com/backstage-backend # Your ECR registry
        tag: 'my-image-12345' # This is where the backend image tag is updated
      resources:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          cpu: 250m
          memory: 512Mi
      env:
        - name: DATABASE_HOST
          value: 'backstage-db-host' # The hostname of your PostgreSQL DB you'll deploy later
        - name: DATABASE_PORT
          value: '5432'
      secrets:
        - name: GITHUB_TOKEN
          secretKey: github_token
  backstage-frontend:
    source:
      repoURL: 'https://github.com/your-org/backstage-app.git'
      targetRevision: main
      path: 'k8s/helm-chart/backstage-frontend'
    destination:
      namespace: backstage
    values:
      image:
        repository: 1234567890.dkr.ecr.us-east-1.amazonaws.com/backstage-frontend
        tag: 'my-image-4567' # This is where the frontend image tag is updated
      ingress:
        enabled: true
        host: backstage.your-company.com
```
- Progressive Delivery: For a truly robust system, set up progressive delivery with tools like Argo Rollouts. This allows for things like canary deployments, health-gated cutovers, and automated rollbacks, ensuring a working version of Backstage is always available (a minimal Rollout sketch follows).
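As a hedged sketch of what a canary rollout for the backend could look like (this assumes your chart renders an Argo Rollouts `Rollout` instead of a standard `Deployment`; names, weights, and durations are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: backstage-backend
  namespace: backstage
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backstage-backend
  template:
    metadata:
      labels:
        app: backstage-backend
    spec:
      containers:
        - name: backstage-backend
          image: 1234567890.dkr.ecr.us-east-1.amazonaws.com/backstage-backend:my-image-12345
          ports:
            - containerPort: 7007
  strategy:
    canary:
      steps:
        - setWeight: 25          # send 25% of traffic to the new version
        - pause: {duration: 5m}  # hold and watch health before continuing
        - setWeight: 50
        - pause: {duration: 5m}
```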
- Pod Specs: Backstage's documentation recommends specific pod specifications for running in a Kubernetes environment. Follow these guidelines to ensure optimal performance and resource utilisation.
- Remaining cloud resources: You'll need a few other pieces of infrastructure to make Backstage production-ready; I recommend managing all of it with Terraform IaC.
- PostgreSQL Database: While Backstage can run on an in-memory database locally, for any production use you need a persistent, reliable database. A managed service like AWS RDS is a great option. Start with a really small instance, since you can always scale up later (see the app-config snippet below for how Backstage connects to it).
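Backstage reads its database settings from app-config. A minimal sketch, assuming the connection details are injected as environment variables (for example by your Helm chart or the secrets setup described below):

```yaml
backend:
  database:
    client: pg
    connection:
      host: ${DATABASE_HOST}
      port: ${DATABASE_PORT}
      user: ${DATABASE_USER}
      password: ${DATABASE_PASSWORD}
```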
- Load Balancer and Ingress: To expose your Backstage frontend and backend services to the internet, you need a load balancer. You have a few options:
- One for the cluster: A single load balancer for the entire cluster (e.g., an AWS Application Load Balancer) is cheaper but can be more complex to manage, especially if you have many different services.
- One per service with Ingress: A better approach is to use an Ingress Controller (like the AWS Load Balancer Controller) that can automatically provision a new load balancer for each service or manage a single, shared load balancer with intelligent routing.
- DNS: You'll need a DNS record pointing to your load balancer so users can access Backstage via a human-readable URL (e.g., backstage.your-company.com).
- Secret Management: Kubernetes secrets are a "gotcha" waiting to happen. If a secret changes, you have to restart the pod for the change to take effect. This is a mess. The fix: use the External Secrets Operator (ESO). It's a Kubernetes operator that fetches secrets from a dedicated secrets manager (like AWS Secrets Manager) and injects them as environment variables or files into your pods. When a secret changes upstream, ESO keeps the Kubernetes Secret in sync automatically, so you're no longer shuffling secret values around by hand (see the sketch below). Learn more here: https://external-secrets.io/latest/
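A minimal ExternalSecret sketch, assuming you've already created a ClusterSecretStore pointing at AWS Secrets Manager (the store name and the Secrets Manager key are placeholders):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: backstage-secrets
  namespace: backstage
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager   # a ClusterSecretStore you define separately
    kind: ClusterSecretStore
  target:
    name: backstage-secrets     # the Kubernetes Secret ESO will create and keep in sync
  data:
    - secretKey: GITHUB_TOKEN
      remoteRef:
        key: backstage/github   # hypothetical AWS Secrets Manager entry
        property: token
```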
- Authentication and User Management
- GitHub App for SSO: Backstage has a robust authentication system with many providers. If your organization uses GitHub Enterprise, setting up a GitHub App for SSO is a powerful way to authenticate users and scrape existing permissions (e.g., team memberships). This requires a GitHub Enterprise license.
- Alternative: If you're not on GitHub Enterprise, you can authenticate users in any way you like, with plugins for other identity providers or nothing at all if you swing that way
Backstage is an IDP, so knowing who is using it is critical. You’ll likely already have a GitHub org set up with various permissions and accesses, so let’s leverage that with SSO login (a config sketch follows).
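A minimal sketch of the app-config side of GitHub sign-in, assuming you've created a GitHub OAuth App (or GitHub App) and exposed its credentials as environment variables; the resolver shown is one of the built-in options, and newer Backstage versions require you to pick one explicitly:

```yaml
auth:
  environment: production
  providers:
    github:
      production:
        clientId: ${AUTH_GITHUB_CLIENT_ID}
        clientSecret: ${AUTH_GITHUB_CLIENT_SECRET}
        signIn:
          resolvers:
            # maps the GitHub username to a User entity of the same name in the catalog
            - resolver: usernameMatchingUserEntityName
```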
And that’s it! Well done if you’ve come this far; K8s is not easy. Next we’ll focus on actually building your Backstage app itself to be useful for your teams.
Developing the Backstage App
Workflow Types: A Common Confusion
Backstage offers two powerful ways to automate ‘doing things’, each with its own pros and cons and unique vocabulary that I encourage you to understand before you develop anything.
- Software Templates (The "Fire-and-Forget" Approach)
- What they are: These are YAML-based templates that use the Scaffolder plugin to run a series of steps. They're ideal for "one-way" operations like creating a new GitHub repository, setting up a CI pipeline, or provisioning cloud infrastructure.
- How they work: A user fills out a form in the Backstage UI (Backstage provides all this for you). The Scaffolder takes that input and executes a YAML workflow. The workflow might call a GitHub Action, a Terraform script, or an external API. The user gets a simple success or failure message and can then move on.
- Best for: Tasks where the outcome is predictable, and you don't need real-time feedback or complex user interaction. Think of it as hitting "send" on a well-crafted email.
- Trap: It’s very quick to set up, and you may feel encouraged to perform all sorts of tasks this way. It’s good if you need quick deployments, but it’s hard to scale and manage after a certain point.
- Custom Plugins (The "Interactive" Approach)
- What they are: These are full-fledged frontend and backend extensions. You can build a complete UI, handle complex state, and get real-time feedback from the user.
- How they work: You use the Backstage CLI to create new frontend and backend modules. The frontend plugin might display a wizard-like form, while the backend handles the complex business logic, routing and talks to various services. The user gets a dynamic, responsive experience.
- Best for: Complex, multi-step workflows or business operations that require a custom UI and more user interaction. For example, an onboarding process where you need to check multiple systems, fetch data throughout the process or a custom deployment dashboard.
- Trap: This will require web dev skills (React) which not all platform engineers will have, or not to a level where they can confidently build and maintain such a system.

Creating Your First Software Template
Software Templates are the "Easy Button" for your developers. They turn complex, multi-step provisioning tasks into a simple form fill. We'll build a template that takes a few inputs and instantly spins up a new, boilerplate GitHub repository.
Before we begin, ensure your Backstage application has the necessary scaffolding plugins enabled:
- Scaffolder Frontend: Already part of the default application.
- Scaffolder Backend: Handles the execution of the template steps.
- GitHub Integration: You need to configure Backstage to talk to GitHub.
Required Backend Configuration (app-config.yaml)
You need to tell Backstage how to authenticate with GitHub so it can create repos. You typically need a Personal Access Token (PAT) stored as an environment variable or, preferably, passed via your secrets manager (like ESO).
```yaml
integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}

# Optional: set a separate PAT for the Scaffolder to use (must have repo/workflow scope)
proxy:
  /scaffolder-github-action:
    target: https://api.github.com
    headers:
      Authorization: 'Bearer ${GITHUB_ACTIONS_TOKEN}'
```
Backstage templates are defined using YAML and use the Scaffolder's built-in actions.
Create a new directory for your templates (e.g., templates/new-repo).
File: templates/new-repo/template.yaml

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: github-repo-starter
  title: GitHub Repository Starter
  description: Creates a new GitHub repository with a basic boilerplate.
  tags:
    - github
    - starter
    - repository
spec:
  owner: user:guest
  type: service # Define the type of entity this template creates
  parameters:
    - title: Repository Details
      required:
        - repoName
        - repoDescription
        - owner
      properties:
        repoName:
          title: Repository Name
          type: string
          description: The name of the new repository (e.g., my-new-service).
          ui:autofocus: true
        repoDescription:
          title: Repository Description
          type: string
          description: A short description for the GitHub repository.
        owner:
          title: Repository Owner/Team
          type: string
          description: The GitHub owner or team that will manage this repo (e.g., engineering-team).
          ui:field: OwnerPicker # Backstage helper for picking registered teams/users
  steps:
    - id: fetch-skeleton
      name: Fetch Skeleton Code
      action: fetch:template
      input:
        url: ./skeleton
        values:
          repoName: ${{ parameters.repoName }}
    - id: publish-to-github
      name: Create Repository and Push Initial Code
      action: publish:github # creates the repo and pushes the workspace contents
      input:
        repoUrl: github.com?repo=${{ parameters.repoName }}&owner=${{ parameters.owner }}
        description: ${{ parameters.repoDescription }}
        repoVisibility: public # or private, internal
        defaultBranch: main
        gitCommitMessage: Initial commit of generated repository structure
  output:
    links:
      - title: Repository
        url: ${{ steps['publish-to-github'].output.remoteUrl }}
```
And that’s it! No UI work required. You can now add this to the /catalog folder in Backstage itself or register it another way, for example via a catalog location in app-config (a snippet follows).
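One common way to register the template, assuming it lives in the same repo as your Backstage app (the relative path is a placeholder for wherever you put the file):

```yaml
catalog:
  rules:
    - allow: [Component, System, API, Resource, Location, Template]
  locations:
    - type: file
      target: ../../templates/new-repo/template.yaml
      rules:
        - allow: [Template]
```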

Creating Your First Custom Plugin (Frontend UI and Backend Endpoint)
A custom plugin is built from two main pieces:
- Frontend (React): The user interface (UI) where the user interacts.
- Backend (Node.js/Express): The API that handles communication with external systems (like GitHub's API) and business logic.
Generate the plugin structure: use the Backstage CLI to scaffold a new plugin pair (e.g., by running the app's `yarn new` script from the repo root and selecting the frontend plugin and backend plugin options).
This creates two directories:
- plugins/github-onboarder (Frontend)
- plugins/github-onboarder-backend (Backend)
The backend's job is to talk to GitHub. We need an endpoint that accepts a username, checks if it exists, and possibly creates the user or triggers a relevant workflow.
File: plugins/github-onboarder-backend/src/service/router.ts
We will create a simple endpoint /check-user that simulates checking a user's existence against the GitHub API (using a simplified mock for clarity).

```typescript
import express, { Router } from 'express';
import { Logger } from 'winston';
import { Config } from '@backstage/config';

export interface RouterOptions {
  logger: Logger;
  config: Config;
}

export async function createRouter(options: RouterOptions): Promise<Router> {
  const { logger } = options;

  const router = Router();
  router.use(express.json());

  // In a real implementation you would call the GitHub API here
  // (https://api.github.com) using a token from config.
  router.get('/check-user', async (req, res) => {
    const { username } = req.query;
    logger.info(`Checking existence of GitHub user: ${username}`);

    // Mocked responses for clarity
    if (username === 'existing-dev') {
      res.status(200).json({
        exists: true,
        message: `User ${username} is already an active member.`,
      });
    } else {
      res.status(200).json({
        exists: false,
        message: `User ${username} not found. Please fill out the onboarding form.`,
      });
    }
  });

  router.post('/onboard', async (req, res) => {
    const { username, email, team } = req.body;
    logger.info(`Starting onboarding process for ${username} (${email}) to team ${team}`);

    // Here you would trigger the real onboarding workflow (org invite, ticket creation, etc.)
    res.status(201).json({
      success: true,
      message: `Onboarding process started for ${username}. A ticket has been created.`,
    });
  });

  return router;
}
```
Now, you need to load your new backend plugin into the overall Backstage backend.
```typescript
// packages/backend/src/index.ts (simplified excerpt, legacy backend wiring)
import Router from 'express-promise-router';
import { createRouter as createGithubOnboarderRouter } from '@internal/plugin-github-onboarder-backend';

async function main() {
  // `root` and `config` come from the generated backend setup
  const logger = root.child({ scope: 'backend' });
  const apiRouter = Router();

  // Mount the onboarder backend; it ends up under /api/github-onboarder
  apiRouter.use(
    '/github-onboarder',
    await createGithubOnboarderRouter({ logger, config }),
  );
}

main().catch(error => {
  console.error('Backend failed to start', error);
  process.exit(1);
});
```
Finally, build and expose the frontend using React and Backstage's utility components to create a good user experience. The component below talks to our backend via the backend base URL from the app config (you could also route these calls through the Backstage proxy).
```tsx
import React, { useState } from 'react';
import { Content, Header, Page, Progress } from '@backstage/core-components';
import { useApi, configApiRef } from '@backstage/core-plugin-api';
import { Alert } from '@material-ui/lab';
import {
  Button,
  TextField,
  Box,
  Card,
  CardContent,
  Typography,
} from '@material-ui/core';

export const OnboarderComponent = () => {
  const [username, setUsername] = useState('');
  const [email, setEmail] = useState('');
  const [team, setTeam] = useState('');
  const [status, setStatus] = useState<'idle' | 'checking' | 'onboarding'>('idle');
  const [result, setResult] = useState<{
    message: string;
    exists?: boolean;
    success?: boolean;
  } | null>(null);

  const configApi = useApi(configApiRef);
  const baseUrl = configApi.getString('backend.baseUrl') + '/api/github-onboarder';

  const handleCheckUser = async () => {
    setStatus('checking');
    setResult(null);
    try {
      const response = await fetch(`${baseUrl}/check-user?username=${username}`);
      const data = await response.json();
      setResult(data);
    } catch (error) {
      setResult({ message: 'Error checking user status.', exists: false });
    } finally {
      setStatus('idle');
    }
  };

  const handleOnboardUser = async () => {
    setStatus('onboarding');
    setResult(null);
    try {
      const response = await fetch(`${baseUrl}/onboard`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ username, email, team }),
      });
      const data = await response.json();
      setResult(data);
    } catch (error) {
      setResult({ success: false, message: 'Failed to initiate onboarding workflow.' });
    } finally {
      setStatus('idle');
    }
  };

  const showForm = result?.exists === false;
  const isChecking = status === 'checking';
  const isSubmitting = status === 'onboarding';

  return (
    <Page themeId="tool">
      <Header title="GitHub Onboarder" subtitle="Quickly onboard new developers to GitHub" />
      <Content>
        <Card>
          <CardContent>
            <Typography variant="h5" component="h2" gutterBottom>
              Check User Status
            </Typography>
            <Box mb={2}>
              <TextField
                label="GitHub Username"
                fullWidth
                value={username}
                onChange={e => setUsername(e.target.value)}
                disabled={isChecking}
              />
            </Box>
            <Button
              variant="contained"
              color="primary"
              onClick={handleCheckUser}
              disabled={!username || isChecking}
            >
              {isChecking ? <Progress /> : 'Check User'}
            </Button>
            {result && (
              <Box mt={3}>
                <Alert severity={result.exists ? 'success' : 'info'}>{result.message}</Alert>
              </Box>
            )}
            {showForm && (
              <Box mt={4} p={3} style={{ border: '1px solid #ccc', borderRadius: '4px' }}>
                <Typography variant="h6" gutterBottom>
                  Onboarding Request Form
                </Typography>
                <TextField
                  label="Email Address"
                  fullWidth
                  margin="normal"
                  value={email}
                  onChange={e => setEmail(e.target.value)}
                />
                <TextField
                  label="Target Team (e.g., 'platform-engineering')"
                  fullWidth
                  margin="normal"
                  value={team}
                  onChange={e => setTeam(e.target.value)}
                />
                <Button
                  variant="contained"
                  color="secondary"
                  onClick={handleOnboardUser}
                  disabled={isSubmitting || !email || !team}
                  style={{ marginTop: '16px' }}
                >
                  {isSubmitting ? <Progress /> : 'Submit Onboarding Request'}
                </Button>
              </Box>
            )}
          </CardContent>
        </Card>
      </Content>
    </Page>
  );
};
```
Finally finally, you need to add the plugin's page to the main navigation and routing of the Backstage frontend.
```tsx
// packages/app/src/App.tsx (relevant additions)
import { Route } from 'react-router-dom';
import { FlatRoutes } from '@backstage/core-app-api';
import { Sidebar, SidebarItem } from '@backstage/core-components';
import AssignmentIndIcon from '@material-ui/icons/AssignmentInd';
import { GithubOnboarderPage } from '@internal/plugin-github-onboarder';

const routes = (
  <FlatRoutes>
    {/* ...existing routes... */}
    <Route path="/github-onboarder" element={<GithubOnboarderPage />} />
  </FlatRoutes>
);

// If you want the page added to the Navbar (this usually lives in Root.tsx)
const navItem = (
  <SidebarItem icon={AssignmentIndIcon} to="github-onboarder" text="GitHub Onboarder" />
);

const sidebar = (
  <Sidebar>
    {/* ...existing items... */}
    {navItem}
  </Sidebar>
);
```
This setup gives you a complete custom workflow: the user inputs data on the frontend, the frontend communicates with the dedicated backend API, and the backend performs the necessary external system checks and actions. This is the blueprint for creating any high-value, complex, business-logic-driven feature in Backstage.

From here I leave you with the next challenges such as logging strategy, permissions setup, deciding which third-party plugins suit your organisation and setting up the infamous ‘Automatic discovery’.
Good luck, Rockstar
