Get support for sagikazarmark/harbor-perf-test

If you're new to LTH, please see our FAQ for more information on what it is we do.

Support Options

Unfortunately, there are currently no active helpers for this repository on the platform. Until they become available, we recommend the following actions:

View Open Issues

Take a look to see if anyone else has experienced the same issue as you and if they managed to solve it.

Open an Issue

Make sure to read any relevant guidelines for opening issues on this repo before posting a new issue.

Sponsor directly

Check out the project's page and see if there are any options to sponsor this project or its developers directly.

sagikazarmark/harbor-perf-test

Harbor Performance Tests

This repo contains tools to run performance tests on Harbor.

Prerequisites

The tools in this repository use AWS for running Harbor and the performance tests, so you'll need an AWS account.

You'll also have to be familiar with Terraform to create test environments.

The performance test suite uses k6. Getting to know it a little before running large test suites doesn't hurt.
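
If you have never used k6 before, a quick local smoke run is an easy way to get a feel for it. The script name and load parameters below are placeholders, not files from this repository:

    # Hypothetical smoke test: one virtual user for 30 seconds
    k6 run --vus 1 --duration 30s smoke-test.js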

Preparations

Take a look at the example environments under infrastructure/environments/. Create one (or more) environments that reflect what you would like to test (e.g. different HA setups or different storage solutions).

Alternatively, you can reuse one of the existing environments (the full flow is sketched after this list):

  1. Create a terraform.tfvars file in the environment directory with the necessary values
  2. Run terraform apply
  3. Run ssh ubuntu@$(terraform output -raw test_executor_ip) to SSH into the test executor machine
  4. Run your tests!
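
Put together, the flow looks roughly like this (the environment directory name and the contents of terraform.tfvars are placeholders that depend on the environment you pick):

    # Pick an environment (placeholder directory name) and provide its inputs
    cd infrastructure/environments/<environment>
    # ... create terraform.tfvars with the values required by that environment ...
    terraform init
    terraform apply
    # SSH into the test executor machine once provisioning is done
    ssh ubuntu@$(terraform output -raw test_executor_ip)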

Running performance tests

The Harbor project has a performance test suite of its own: https://github.com/goharbor/perf

The test executor machine comes with this test suite (and the relevant tools) pre-installed.

You can follow the instructions in the repository to run the test suite, or you can run the tests manually:

  1. Load test data into Harbor by running k6 run scripts/data/SCRIPT_NAME
  2. Run the individual tests by running k6 run scripts/test/SCRIPT_NAME

For small tests, you might want to consider setting the number of VUs lower than the default in each test case. If you use the original (magefile) execution strategy, you can do that by setting the HARBOR_VUS environment variable.
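
For example (SCRIPT_NAME and the VU count are placeholders; HARBOR_VUS only affects the magefile-based execution from goharbor/perf):

    # Manual k6 run: CLI flags override the defaults defined in the script
    k6 run --vus 10 scripts/test/SCRIPT_NAME
    # Magefile-based execution: control the number of VUs via the environment
    export HARBOR_VUS=10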

Cleaning up

Although Terraform should be able to clean up after itself, that's not always the case. For example, it cannot delete S3 buckets while there is data in them. So before destroying the infrastructure, think about resources that might need manual cleanup.

Then run terraform destroy to destroy the test environment.
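
For a non-versioned S3 bucket, emptying it first is usually enough (the bucket name is a placeholder):

    # Remove all objects so Terraform can delete the bucket afterwards
    aws s3 rm s3://<bucket-name> --recursive
    terraform destroy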

Further tips

Using an existing EKS cluster

When using an existing EKS cluster, you need to make sure that you can access the cluster (even though you don't have to interact with it). This is necessary because Terraform uses the AWS IAM authenticator internally (or rather mimics its behavior) to generate a short-lived token for accessing the cluster.

To gain access to the cluster, you either have to be the one who created it, or you have to add an entry to the aws-auth ConfigMap in the kube-system namespace. More details can be found in the official documentation.

For example, you can add the following to the mapRoles section in the configmap:

    - rolearn: <ARN of role>
      username: admin
      groups:
        - system:masters

Alternatively, you can use the following snippet to do the same:

    # The role ARN that should be granted access to the cluster
    ROLE_ARN="<ARN of role>"
    # The aws-auth entry to insert (same content as the YAML snippet above)
    ROLE="    - rolearn: ${ROLE_ARN}\n      username: admin\n      groups:\n        - system:masters"
    # Insert the entry right below the "mapRoles: |" line, then apply the patch
    kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" > /tmp/aws-auth-patch.yml
    kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
    rm -f /tmp/aws-auth-patch.yml
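
Once the entry is in place, you can verify access with standard tooling (the cluster name and region are placeholders):

    # Add the cluster to your kubeconfig and check that the API is reachable
    aws eks update-kubeconfig --region <region> --name <cluster-name>
    kubectl get nodes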
