
Stack Deployment Guide

This guide describes how to set up a Carbyne Stack Virtual Cloud (VC) consisting of two Virtual Cloud Providers (VCPs).



Carbyne Stack has been tested using the exact versions of the tools specified below. Deviating from this battle-tested configuration may cause all kinds of issues.

In addition, this guide assumes you have access to two properly configured K8s clusters (herein referred to as apollo and starbuck) with the following components:

  • Kubernetes v1.18.19
  • Istio v1.7.3
  • MetalLB v0.9.3
  • Knative v0.19.0
  • Zalando Postgres Operator v1.5.0

Throughout the remainder of this guide, we assume that you have set up local clusters using the kind tool as described in the Platform Setup guide.

Virtual Cloud Deployment


In case you are on a slow internet connection, you can use

kind load docker-image <image> --name <cluster-name>

to load images from your local docker registry into the kind clusters. This way you have to download the images only once and then reuse them across VCP deployments.
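For example, a small helper along the following lines pulls each image once and then loads it into both clusters. The helper and its image arguments are hypothetical, not part of the guide; substitute the images your deployment actually pulls.

```shell
# Hypothetical helper: pull each image once, then load it into both kind
# clusters so later VCP deployments reuse the local copies.
preload_images() {
  local image cluster
  for image in "$@"; do
    docker pull "$image"
    for cluster in apollo starbuck; do
      kind load docker-image "$image" --name "$cluster"
    done
  done
}
```

Invoke it as, e.g., `preload_images <image>` with the images used by the Carbyne Stack services.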

  1. Check out the carbynestack repository and descend into the repository root directory using:

    git clone https://github.com/carbynestack/carbynestack.git
    cd carbynestack
  2. Before deploying the virtual cloud providers, make some common configuration available using:


    Replace the empty APOLLO_FQDN and STARBUCK_FQDN values below with the load balancer IPs assigned to the Istio Ingress Gateway by MetalLB (see the Platform Setup guide).

    export APOLLO_FQDN=""
    export STARBUCK_FQDN=""
    export RELEASE_NAME=cs
    export NO_SSL_VALIDATION=true
  3. Launch the starbuck VCP using:

    export IS_MASTER=false
    export AMPHORA_VC_PARTNER_URI=http://$APOLLO_FQDN/amphora
    kubectl config use-context kind-starbuck
    helmfile apply
  4. Launch the apollo VCP using:

    export IS_MASTER=true
    export AMPHORA_VC_PARTNER_URI=http://$STARBUCK_FQDN/amphora
    export CASTOR_SLAVE_URI=http://$STARBUCK_FQDN/castor
    kubectl config use-context kind-apollo
    helmfile apply
  5. Wait until all pods in both clusters are in the ready state.
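One way to script the waiting in step 5 is kubectl's built-in wait command. The sketch below is a hypothetical helper that assumes the kind contexts from the steps above; the timeout is an arbitrary choice.

```shell
# Sketch: block until all pods in both clusters report Ready. Note that pods
# of completed jobs never become Ready and may have to be excluded.
wait_for_pods() {
  local ctx
  for ctx in kind-apollo kind-starbuck; do
    kubectl config use-context "$ctx"
    kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=600s
  done
}
```

Run `wait_for_pods` after both `helmfile apply` invocations have returned.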

Preparing the Virtual Cloud

  1. Carbyne Stack comes with a CLI that can be used to interact with a virtual cloud from the command line. Install the CLI using:

    export CLI_VERSION=0.2-SNAPSHOT-2336890983-14-a4260ab
    curl -o cs.jar -L$CLI_VERSION/cli-$CLI_VERSION-jar-with-dependencies.jar
  2. Next, configure the CLI to talk to the just deployed virtual cloud by creating a matching CLI configuration file in ~/.cs using:

    mkdir -p ~/.cs
    cat <<EOF | envsubst > ~/.cs/config
    {
      "prime" : 198766463529478683931867765928436695041,
      "r" : 141515903391459779531506841503331516415,
      "noSslValidation" : true,
      "trustedCertificates" : [ ],
      "providers" : [ {
        "amphoraServiceUrl" : "http://$APOLLO_FQDN/amphora",
        "castorServiceUrl" : "http://$APOLLO_FQDN/castor",
        "ephemeralServiceUrl" : "http://$APOLLO_FQDN/",
        "id" : 1,
        "baseUrl" : "http://$APOLLO_FQDN/"
      }, {
        "amphoraServiceUrl" : "http://$STARBUCK_FQDN/amphora",
        "castorServiceUrl" : "http://$STARBUCK_FQDN/castor",
        "ephemeralServiceUrl" : "http://$STARBUCK_FQDN/",
        "id" : 2,
        "baseUrl" : "http://$STARBUCK_FQDN/"
      } ],
      "rinv" : 133854242216446749056083838363708373830
    }
    EOF

    Alternatively, you can use the CLI tool itself to do the configuration by providing the respective values (as seen above in the HEREDOC) when asked using:

    java -jar cs.jar configure

    You can verify that the configuration works by fetching telemetry data from castor using:


    Replace <#> with either 1 for the apollo cluster or 2 for the starbuck cluster.

    java -jar cs.jar castor get-telemetry <#>
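If the CLI reports errors, a quick thing to rule out is a malformed configuration file. The following self-contained sketch renders an abbreviated version of the configuration (two providers only, dummy FQDNs, hypothetical file path) and checks that it parses as JSON; it assumes python3 is available. Note that an unquoted heredoc expands the variables by itself, which is what envsubst does in the snippet above.

```shell
# Render an abbreviated config with dummy values and validate it as JSON.
APOLLO_FQDN="10.0.0.1"      # dummy value for illustration
STARBUCK_FQDN="10.0.0.2"    # dummy value for illustration
cat <<EOF > /tmp/cs-config-check
{
  "prime" : 198766463529478683931867765928436695041,
  "providers" : [ {
    "id" : 1,
    "baseUrl" : "http://$APOLLO_FQDN/"
  }, {
    "id" : 2,
    "baseUrl" : "http://$STARBUCK_FQDN/"
  } ]
}
EOF
python3 -m json.tool /tmp/cs-config-check > /dev/null && echo "valid JSON"
```

The same check applied to ~/.cs/config catches missing braces or commas before they surface as opaque CLI failures.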

Upload Offline Material

Before you can actually use the services provided by the Virtual Cloud, you have to upload cryptographic material. As generating offline material is a very time-consuming process, we provide pre-generated material.


Using pre-generated offline material is not secure at all. DO NOT DO THIS IN A PRODUCTION SETTING.

  1. Download and decompress the archive containing the material using:

    curl -O -L
    unzip -d crypto-material
  2. Upload and activate tuples using:


    Adapt the NUMBER_OF_CHUNKS variable in the following snippet to tune the number of uploaded tuples. In case NUMBER_OF_CHUNKS > 1, the same tuples are uploaded repeatedly.

    cat << 'EOF' > upload-tuples.sh
    #!/bin/bash
    # NUMBER_OF_CHUNKS, TUPLE_FOLDER, and the tuples array (pairs of
    # "<tuple-type> <file-prefix>") must be defined before uploadTuples runs.
    SCRIPT_PATH="$( cd "$(dirname "$0")" ; pwd -P )"
    cs="java -jar ${SCRIPT_PATH}/cs.jar"
    function uploadTuples {
       for t in "${tuples[@]}"; do
          set -- $t
          local type=$1
          local tuple_file=$2
          for (( i=0; i<${NUMBER_OF_CHUNKS}; i++ )); do
             local chunkId=$(uuidgen)
             echo "Uploading ${type} to http://${APOLLO_FQDN}/castor (Apollo)"
             $cs castor upload-tuple -f ${TUPLE_FOLDER}/${tuple_file}-P0 -t ${type} -i ${chunkId} 1
             local statusMaster=$?
             echo "Uploading ${type} to http://${STARBUCK_FQDN}/castor (Starbuck)"
             $cs castor upload-tuple -f ${TUPLE_FOLDER}/${tuple_file}-P1 -t ${type} -i ${chunkId} 2
             local statusSlave=$?
             if [[ "${statusMaster}" -eq 0 && "${statusSlave}" -eq 0 ]]; then
                $cs castor activate-chunk -i ${chunkId} 1
                $cs castor activate-chunk -i ${chunkId} 2
             else
                echo "ERROR: Failed to upload one tuple chunk - not activated"
             fi
          done
       done
    }
    uploadTuples
    EOF
    chmod 755 upload-tuples.sh
    ./upload-tuples.sh
  3. You can verify that the uploaded tuples are now available for use by the Carbyne Stack services using:


    Replace <#> with either 1 for the apollo cluster or 2 for the starbuck cluster.

    java -jar cs.jar castor get-telemetry <#>

You now have a fully functional Carbyne Stack Virtual Cloud at your disposal.

Teardown the Virtual Cloud

You can tear down the Virtual Cloud by tearing down the Virtual Cloud Providers using:

for var in apollo starbuck; do
  kubectl config use-context kind-$var
  helmfile destroy
done
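If you also want to get rid of the local clusters themselves (created in the Platform Setup guide), kind can delete them. The helper below is a hypothetical convenience, assuming the default cluster names apollo and starbuck.

```shell
# Hypothetical helper: delete both kind clusters after the VCPs are torn down.
delete_kind_clusters() {
  local cluster
  for cluster in apollo starbuck; do
    kind delete cluster --name "$cluster"
  done
}
```

Call `delete_kind_clusters` only once the `helmfile destroy` runs above have completed.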