Inputs, Outputs, Artifacts, and Caches
Real pipelines are built from multiple jobs that must cooperate: one job builds the code, another tests it, a third deploys it. To make this work, jobs need ways to pass simple values to each other, share files produced during a run, and avoid repeating expensive work on every execution. GitHub Actions addresses all three needs through inputs and outputs, artifacts, and caches.
Working with Inputs and Outputs
Within a workflow you often need to pass configuration in from the outside, or forward a value computed in one step to a later step or job. GitHub Actions provides specific syntax for each direction of data flow.
Defining and Referencing Workflow Inputs
Workflow inputs are explicit values that a user or another workflow supplies at trigger time. They are not the same as context values or default environment variables — they must be declared in advance under a trigger's inputs: block.
Two triggers support inputs:
workflow_dispatch
Lets a developer start the workflow manually from the Actions tab and fill in values through a form. Useful for release workflows where a human must supply a version number or choose a deployment target.
workflow_call
Allows another workflow to invoke this one as a reusable sub-workflow and pass inputs programmatically. Covered in depth in the Reusable Workflows topic.
Regardless of which trigger fires, the inputs are accessed identically inside the workflow with ${{ inputs.<input-name> }}.
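As an illustrative sketch (the workflow name and input names below are hypothetical), a manually triggered workflow might declare two inputs with defaults:

```yaml
name: release

on:
  workflow_dispatch:          # enables the manual "Run workflow" form
    inputs:
      version:
        description: 'Version number to release'
        required: true
        default: '1.0.0'
      environment:
        description: 'Deployment target'
        required: false
        default: 'staging'

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Show chosen values
        env:
          VERSION: ${{ inputs.version }}       # env indirection; see the note below
          TARGET: ${{ inputs.environment }}
        run: echo "Releasing $VERSION to $TARGET"
```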
A developer opening the Actions tab and choosing to run this workflow will see a form with the two fields pre-filled with the default values. They can override either before clicking Run.
Always be careful with inputs that could be manipulated. Avoid interpolating `${{ inputs.* }}` directly inside `run:` shell scripts, as a crafted value could inject shell commands. Assign the input to an environment variable first and reference the env variable inside the script.
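A minimal sketch of this safer pattern (the input name `tag` is hypothetical):

```yaml
steps:
  - name: Safe use of an input
    env:
      RELEASE_TAG: ${{ inputs.tag }}   # expression expands here, outside the shell
    run: |
      # The shell only ever sees an environment variable, so a crafted
      # input value cannot break out and run arbitrary commands.
      echo "Deploying tag: $RELEASE_TAG"
```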
Capturing Output from a Step
A step can share a value with later steps in the same job by writing to the special file referenced by the $GITHUB_OUTPUT environment variable. The format is name=value appended to that file.
The step must have an id: field so the output can be addressed by name. Other steps in the same job then read it through ${{ steps.<step-id>.outputs.<name> }}.
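A sketch of the pattern, using a hypothetical step that derives a build number from the run number:

```yaml
steps:
  - name: Generate build number
    id: gen-build-num          # id makes the output addressable by later steps
    run: |
      # Append name=value to the file that $GITHUB_OUTPUT points at
      echo "BUILD_NUM=$GITHUB_RUN_NUMBER" >> "$GITHUB_OUTPUT"

  - name: Use build number
    run: echo "Building number ${{ steps.gen-build-num.outputs.BUILD_NUM }}"
```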
The first step writes BUILD_NUM to $GITHUB_OUTPUT. The second step reads it back through the steps context using the id gen-build-num.
GitHub Actions previously used a `set-output` workflow command (`::set-output name=…::`) to capture step outputs. That mechanism was deprecated because it was vulnerable to injection attacks. Always use `$GITHUB_OUTPUT` instead.
Capturing Output from a Job
To make a value produced in one job available to a different job, you must do two things:
- Add an `outputs:` block to the producing job that maps a key name to a step output expression.
- In the consuming job, declare a dependency with `needs:` and then reference the value through `${{ needs.<job-id>.outputs.<key> }}`.
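A sketch with two hypothetical jobs, `prepare` and `validate`:

```yaml
jobs:
  prepare:
    runs-on: ubuntu-latest
    outputs:
      build-num: ${{ steps.gen.outputs.BUILD_NUM }}   # map step output to job output
    steps:
      - id: gen
        run: echo "BUILD_NUM=$GITHUB_RUN_NUMBER" >> "$GITHUB_OUTPUT"

  validate:
    runs-on: ubuntu-latest
    needs: prepare           # ordering + access to prepare's outputs
    steps:
      - run: echo "Validating build ${{ needs.prepare.outputs.build-num }}"
```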
The needs: keyword does double duty: it establishes execution order (validate runs only after prepare succeeds) and it gives the validate job access to prepare's outputs through the needs context.
When the output path expression becomes long and repetitive across multiple steps, it is cleaner to map it to a local environment variable:
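For example, the consuming job can map the expression once at the job level (job and key names here are illustrative):

```yaml
validate:
  runs-on: ubuntu-latest
  needs: prepare
  env:
    BUILD_NUM: ${{ needs.prepare.outputs.build-num }}  # map once, reuse everywhere
  steps:
    - run: echo "Validating build $BUILD_NUM"
    - run: echo "Tagging image myapp:$BUILD_NUM"
```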
Capturing Output from an Action Used in a Step
When a step uses a third-party or community action (via uses:), that action may declare its own outputs in its action.yml metadata file. You capture those outputs in exactly the same way as step outputs — by giving the step an id: and reading through steps.<id>.outputs.<name>.
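A sketch of the pattern; the action reference below is illustrative (check the action's own README for its exact repository name, version, and output names):

```yaml
steps:
  - name: Compute next semantic version
    id: semver                               # id makes the action's outputs addressable
    uses: example-org/github-tag-action@v1   # placeholder reference; pin a real release

  - name: Use the computed tag
    run: echo "Next tag will be ${{ steps.semver.outputs.next_tag }}"
```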
The github-tag-action action exposes an output named next_tag. Because the step has id: semver, it is readable as steps.semver.outputs.next_tag — no extra configuration needed.
Artifacts
Artifacts, as Actions defines them, are files or collections of files created as the result of a job or workflow run and then persisted in GitHub. The most common reason to persist an artifact is so that it can be shared with other jobs in the same workflow — for example, a compiled module that needs to be tested or packaged by a downstream job. You can also access artifacts after the run has finished, either through the Actions UI or via the REST API.
Common uses for artifacts include:
- Compiled binaries or distribution packages produced by a build job
- Test result XML files and coverage reports
- Log files or diagnostic dumps collected when a job fails
- Static site output ready for deployment
Artifacts cannot be kept forever. By default, GitHub retains artifacts and build logs for 90 days before automatically deleting them.
Artifact Retention Policy
If you have the necessary permissions on the repository, you can change the default retention period in the repository's Settings → Actions → General page, under the Artifact and log retention section.
| Repository type | Configurable range |
|---|---|
| Public | 1 – 90 days |
| Private | 1 – 400 days |
This repository-level setting only applies to new artifacts and log files created after the change — existing artifacts are not retroactively affected. Organizations and enterprises can also enforce maximum retention limits at their level, which individual repositories cannot override.
You can also override the retention period for a specific artifact at upload time using the retention-days parameter (described below). This is useful when a pipeline artifact is only needed for the duration of the workflow run and storing it longer would waste storage quota.
GitHub includes a certain amount of artifact storage at no cost depending on your plan. Storage costs accumulate over the full time the artifacts are retained — unlike compute minutes, which are charged per run. Keeping retention periods as short as practical helps manage storage spend.
Artifacts vs. GitHub Packages
GitHub has a separate product called GitHub Packages that should not be confused with workflow artifacts. GitHub Packages is a package registry that hosts versioned, publishable packages for:
- Container images
- RubyGems
- npm packages
- Maven and Gradle packages
- NuGet packages
The key practical differences are: artifacts are tied to a specific workflow run and expire automatically, while packages are versioned releases intended for long-term distribution. GitHub also charges for data transfer with Packages, whereas artifact downloads within Actions are not charged for transfer.
Uploading Artifacts
Use the actions/upload-artifact action to persist files from the current runner. The action accepts the parameters listed in the table below.
| Parameter | Required | Default | Description |
|---|---|---|---|
| `name` | No | `artifact` | Identifier for the artifact. Used when downloading it in another job or from the UI. |
| `path` | Yes | — | File system path to upload. Can be a single file, a directory, or a glob pattern. |
| `if-no-files-found` | No | `warn` | What to do when the path matches nothing. `error` — fail the step; `warn` — log a warning but continue; `ignore` — do nothing silently. |
| `retention-days` | No | Repository default | Number of days before this artifact expires. Must be between 1 and the repository's configured maximum (90 for public, up to 400 for private). |
The path can be a single file, a directory, or a glob pattern. Multiple files that match a glob are bundled into a single zip archive under the given name.
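A typical upload step might look like this (the artifact name and path are placeholders for your build layout):

```yaml
- name: Upload build output
  uses: actions/upload-artifact@v4
  with:
    name: app-dist                 # identifier used when downloading later
    path: dist/                    # single file, directory, or glob pattern
    if-no-files-found: error       # fail fast if the build produced nothing
    retention-days: 7              # expire sooner than the repository default
```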
Downloading Artifacts
Each job runs in its own freshly provisioned runner. Files created in one job are not automatically visible to another — the runner is discarded when the job ends. To use an artifact in a downstream job, you must explicitly download it with actions/download-artifact, specifying the same name used during upload.
The downloaded files are placed in the directory specified by path. If path is omitted, they land in the current working directory under a subdirectory named after the artifact.
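A downstream job downloading the artifact uploaded above might be sketched as (job and artifact names are placeholders):

```yaml
test:
  runs-on: ubuntu-latest
  needs: build                       # artifact must exist before this job runs
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: app-dist               # same name used at upload time
        path: dist                   # destination directory on this runner
    - run: ls -R dist                # inspect what was restored
```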
Artifact Size Limits
| Limit | Value |
|---|---|
| Maximum size per artifact | 10 GB (compressed) |
| Included storage | Depends on GitHub plan; excess billed per GB per month |
Using Caches
Artifacts persist the outputs of a job. Caches persist the inputs — typically dependency packages or build tool downloads — so they do not need to be re-fetched on every run. The difference matters:
| | Artifact | Cache |
|---|---|---|
| What it stores | Build outputs (binaries, reports) | Reusable inputs (dependencies, downloaded tools) |
| Shared between runs | No (per-run) | Yes (across many runs) |
| Invalidated when | Retention period expires | Cache key changes |
Using the Cache Action Explicitly
The actions/cache action saves a directory to the cache and restores it on the next run. A key controls when the cache is considered valid. A common pattern is to include a hash of the dependency lock file in the key so that the cache is automatically invalidated whenever dependencies change.
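A sketch of that pattern for an npm project (paths and prefixes are conventions, not requirements):

```yaml
- name: Cache npm dependencies
  uses: actions/cache@v4
  with:
    path: ~/.npm                     # directory to save and restore
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
```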
Built-in Caching via Setup Actions
Many language setup actions — such as actions/setup-node, actions/setup-python, and actions/setup-java — have caching built in. Enabling it is a single line:
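For example, with `actions/setup-node` (the Node version here is arbitrary):

```yaml
- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: npm        # this one line enables dependency caching
```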
Built-in caching is the simplest option when a setup action supports it, as it handles key generation and cache paths automatically.
Cache Key Design
A well-designed cache key balances two goals: reuse the cache as often as possible, and invalidate it whenever the cached content would be stale. Common key components include:
- `${{ runner.os }}` — separates caches for Linux, macOS, and Windows runners
- `${{ hashFiles('**/package-lock.json') }}` — changes whenever the lock file changes
- A fixed prefix string that identifies the language or tool being cached
The restore-keys: list provides ordered fallbacks. If no exact key matches, GitHub will restore the most recent cache whose key starts with any entry in restore-keys. This gives partial hits — you get most packages from the cache, then only download the delta.
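Combining an exact key with a prefix fallback might look like this (npm paths again used as an illustrative convention):

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-    # partial hit: newest cache whose key has this prefix
```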
Key Takeaways
- Workflow inputs are declared under a trigger's `inputs:` block and accessed with `${{ inputs.<name> }}`.
- Capture a step's output by writing `NAME=value` to `$GITHUB_OUTPUT` and assigning the step an `id:`.
- Expose step outputs to other jobs via the job's `outputs:` block; consume them with `${{ needs.<job>.outputs.<key> }}`.
- Actions that declare outputs in their `action.yml` are accessed the same way as step outputs, via the step's `id:`.
- Artifacts persist files (binaries, reports) beyond a job's lifetime using `actions/upload-artifact` and `actions/download-artifact`.
- Caches persist reusable inputs (dependencies) across runs using `actions/cache` or the built-in `cache:` option in setup actions.
- Design cache keys to include a hash of lock files so the cache is automatically invalidated when dependencies change.
Complete Working Example
The following is a complete, runnable GitHub Actions workflow that demonstrates every concept covered in this topic. Each significant line includes a comment explaining why it is written that way.
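This sketch assumes a Node.js project (a `package-lock.json`, and an `npm run build` script that writes to `dist/`); adjust commands, names, and paths for your stack:

```yaml
name: build-test-demo

on:
  workflow_dispatch:                       # manual trigger exposes an input form
    inputs:
      log-level:
        description: 'Verbosity passed to the build'
        required: false
        default: 'info'

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      build-num: ${{ steps.gen.outputs.BUILD_NUM }}   # expose step output to other jobs
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm                       # built-in dependency caching

      - name: Generate build number
        id: gen                            # id makes the output addressable
        run: echo "BUILD_NUM=$GITHUB_RUN_NUMBER" >> "$GITHUB_OUTPUT"

      - name: Build
        env:
          LOG_LEVEL: ${{ inputs.log-level }}   # env indirection avoids shell injection
        run: |
          npm ci
          npm run build

      - name: Upload build output
        uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
          if-no-files-found: error         # fail if the build produced nothing
          retention-days: 7                # expire sooner than the repo default

  test:
    runs-on: ubuntu-latest
    needs: build                           # ordering + access to build's outputs
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist                       # same name used at upload time
          path: dist

      - name: Smoke test
        env:
          BUILD_NUM: ${{ needs.build.outputs.build-num }}
        run: |
          echo "Testing build $BUILD_NUM"
          ls -R dist
```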
Next topic: Reusable Workflows