To install Precomposer, the following steps need to be taken:
- Create a copy of the Precomposer start template repository
- Start the GitHub workflow
- Register DNS names
- Go to https://github.com/eCubeGmbH/epc-deployment-configurations/ and create a copy (top right button, above the code: "Use this template" -> "Create a new repository").
- The name and location of your new deployment repository can be chosen freely. However, if you use a name other than epc-deployment-configurations, you have to configure the chosen name in the service repositories later (e.g. storefront), so that the services' build pipelines can find and trigger your deployment repository correctly.
- After successful creation, all following steps are done in your own copy.
Your newly created repository needs to be set up with a few GitHub action variables and secrets: Click Settings -> Secrets and Variables -> Actions. To create a GitHub-Action-Secret, switch to the "Secrets" tab and click the green button "New repository secret". For a GitHub-Action-Variable, switch to the "Variables" tab and click the green button "New repository variable".
Add these as GitHub-Action-Secrets:

| Secret | Description |
| --- | --- |
| PERSONAL_ACCESS_TOKEN | Access to GitHub. More info about this token below. |
| K8S_CLUSTER_CONFIG | Kubernetes config for the cluster to be used. More info about this token below. |
| DO_ACCESS_TOKEN (only for DigitalOcean) | Secret used to find out the external IP of the DigitalOcean load balancer. |
| (only for AWS EKS) | Secret to access the AWS EKS cluster. |
| (only for AWS EKS) | Secret to access the AWS EKS cluster. |
Details about PERSONAL_ACCESS_TOKEN
It is created via GitHub -> Profile photo -> Settings -> Developer settings -> Personal access tokens -> Generate new token. Make sure that the access token is created by the same user who has access to the newly copied Precomposer template repository.
Where is it used?
- It is used to deploy a generated ssh-key in the repo deployment-configurations, which gives ArgoCD access to it.
- In the k8s cluster, a secret "docker-pull-secrets" is created so that the cluster can fetch images.
So permission scopes for the repository (repo) and the registry (write:packages) are needed.
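As a quick sanity check, you can inspect which scopes a classic personal access token carries via the `X-OAuth-Scopes` response header of the GitHub API. This is a sketch; the token value is a placeholder you must replace:

```shell
# Verify the scopes of a (classic) personal access token.
# GITHUB_PAT is a placeholder; export your real token before running.
GITHUB_PAT="${GITHUB_PAT:-ghp_placeholder}"
if command -v curl >/dev/null 2>&1; then
  # The X-OAuth-Scopes header lists the granted scopes
  # (it should include: repo, write:packages).
  curl -sI -H "Authorization: token $GITHUB_PAT" https://api.github.com/user \
    | grep -i '^x-oauth-scopes' \
    || echo "Could not read scopes (check token and network)."
fi
```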
Details about K8S_CLUSTER_CONFIG
This is the Kubernetes cluster config (also used e.g. for `$KUBECONFIG` in the shell). It can be gathered e.g. like this: `kubectl config view --minify --flatten`.
When using DigitalOcean you can access the config on the webpage: Switch from the cluster's overview to the tab "Settings", then click the blue "Actions" button at the top right and choose "Download Config".
Paste the entire content of the config file into the secret field when creating this GitHub-Action-Secret.
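If you have the GitHub CLI installed, you can pipe the config straight into the secret instead of pasting it. A sketch, assuming the repository slug below is replaced with your own copy:

```shell
# Export the current cluster config and store it as the GitHub-Action-Secret.
# The repo slug is a placeholder; kubectl and gh must be logged in.
REPO="your-org/epc-deployment-configurations"
if command -v kubectl >/dev/null 2>&1 && command -v gh >/dev/null 2>&1; then
  kubectl config view --minify --flatten \
    | gh secret set K8S_CLUSTER_CONFIG --repo "$REPO" \
    || echo "Could not set secret (check gh auth and cluster access)."
else
  echo "kubectl or gh not installed; paste the config via the GitHub UI instead."
fi
```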
Details about DO_ACCESS_TOKEN
This is needed when using a DigitalOcean cluster to determine the cluster's external IP address.
It can be created via DigitalOcean -> API (main navigation, left side) -> Generate New Token (gray button, top right).
Select both read and write scopes.
Next up we need to add secrets for our composable commerce services, e.g. Commercetools and Storyblok. For creating the Commercetools secrets, please follow this guide and read the details below for information about the scopes and why we need two API client credentials. For Storyblok, check these instructions and also follow the details below.
These secrets are prefixed with TO_K8S_ to indicate that they're written directly to your Kubernetes cluster and have no relevance in GitHub afterwards.
You can either add these manually via the GitHub UI or use the GitHub command line tools if you are familiar with them. For that case we prepared github-secrets-example.txt in the root of your new repository, containing all needed secrets. It also contains an example command which fills them into a given GitHub repository.
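A variant using the GitHub CLI's dotenv loader could look like the sketch below. The repo slug is a placeholder, and we assume the file uses the KEY=VALUE format that `gh secret set -f` expects; the example command inside the file itself is authoritative:

```shell
# Load all secrets from the prepared file into the repository in one go.
# REPO is a placeholder; adjust it to your copy of the template.
SECRETS_FILE="github-secrets-example.txt"
REPO="your-org/epc-deployment-configurations"
if command -v gh >/dev/null 2>&1 && [ -f "$SECRETS_FILE" ]; then
  # -f reads one KEY=VALUE pair per line and creates one secret each.
  gh secret set -f "$SECRETS_FILE" --repo "$REPO" \
    || echo "Could not set secrets (check gh auth)."
else
  echo "gh CLI or $SECRETS_FILE not available; run this from your repo checkout."
fi
```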
Add these as GitHub-Action-Secrets:

| Secret | Description |
| --- | --- |
|  | The Auth-Client-ID from Commercetools. |
|  | The Auth-Client-Secret from Commercetools. |
|  | The Client-ID from Commercetools. |
|  | The Client-Secret from Commercetools. |
| TO_K8S_STORYBLOK_ACCESS_TOKEN | The Access-Token from Storyblok. |
| STORYBLOK_MGMT_TOKEN | The Management-Token from your Storyblok account to import content. Only required if you want to import data. |
| ALGOLIA_APPLICATION_ID | The Application ID from Algolia.* |
| ALGOLIA_SEARCH_KEY | The Search-only API key from Algolia.* |

\* Required when Algolia is used as the search provider for products (further information can be found in the variables section -> TO_K8S_SEARCH_PROVIDER_TO_USE).
Details about the Commercetools credentials
We require two sets of Commercetools credentials, because we need two differently scoped tokens when communicating with Commercetools. For creating a new API client go to Commercetools -> Settings -> Developer Settings and press the button "+ Create new API client".
- AUTH API Client: All secrets containing _AUTH_CLIENT are used to create session-related tokens after a user has logged in successfully. The tokens are scoped to only view personal entities (e.g. carts, orders). Use the "Mobile & single-page application client" template.
- General API Client: All other secrets are used for handling the standard shop requests (e.g. product search, anonymous checkout). Use the "Admin client" template.
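To verify a freshly created client, you can request a token via the Commercetools client-credentials flow. All values below are placeholders; use the auth URL, client ID, and secret shown when you created the API client:

```shell
# Request an access token with the General API client (client-credentials flow).
# AUTH_URL is an example for the GCP europe-west1 region; CLIENT_ID and
# CLIENT_SECRET are placeholders for the values from your API client.
AUTH_URL="https://auth.europe-west1.gcp.commercetools.com"
CLIENT_ID="your-client-id"
CLIENT_SECRET="your-client-secret"
if command -v curl >/dev/null 2>&1; then
  curl -s -u "$CLIENT_ID:$CLIENT_SECRET" \
    -d "grant_type=client_credentials" \
    "$AUTH_URL/oauth/token" \
    || echo "Token request failed (check network and credentials)."
fi
```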
Details about the Algolia credentials
- ALGOLIA_APPLICATION_ID: You can find the application ID in your Algolia Dashboard. Navigate to the 'Search' tab, and the application ID will be displayed on the top left side of the page. If you have multiple application IDs, click the arrow button to view them all.
- ALGOLIA_SEARCH_KEY: The Search-only API key in Algolia is a type of API key that provides read-only access to your indices. It's used to enable search capabilities without allowing any modifications to the data. This ensures that users of your application can search your indices but cannot accidentally or maliciously alter them. You can find the API keys in your Dashboard -> Settings -> API Keys.
Details about the Storyblok credentials
Effectively you have two separate tokens: one for read-only access by the shop, and one to insert and update data during setup. For the STORYBLOK_MGMT_TOKEN, go to Storyblok -> My account -> Account settings -> Personal access token and click "Generate a new token". For the TO_K8S_STORYBLOK_ACCESS_TOKEN, go to your Storyblok Space -> Settings -> Access Tokens. Either generate a new token with access level "preview" or copy the key of the existing one if there is one.
Additionally, add the following GitHub-Action-Variables (if required):

| Variable | Description |
| --- | --- |
| TO_K8S_COMMERCETOOLS_PROJECT_KEY | The Project-Key from Commercetools. |
| TO_K8S_SEARCH_PROVIDER_TO_USE | Determines the search provider. Valid values: "algolia", "commercetools". If unset, defaults to "commercetools". |
|  | The Auth-Client-Scopes from Commercetools. |
|  | The API-URL from Commercetools. |
|  | The Auth-URL from Commercetools. |
|  | Valid index name from your Algolia Dashboard.* |

\* Required when Algolia is used as the search provider for products.
Details about TO_K8S_COMMERCETOOLS_PROJECT_KEY
To get the project key go to Commercetools -> My Account (Profile Icon) -> Manage projects and then copy the key of the desired project from the list. The project key is also displayed when creating the AUTH or General API Client.
Details about TO_K8S_SEARCH_PROVIDER_TO_USE
This environment variable specifies the search provider your application will use for querying and retrieving product data.
- algolia: When set to this value, your application will utilize Algolia as its search backend. In the case of Algolia, it's mandatory to provide the Index Name, Application ID, and Search Key!
- commercetools: Selecting "commercetools" means your application will harness the search capabilities of Commercetools. If this variable is unset, the application defaults to using Commercetools.
Note: The value for this variable can be input in a case-insensitive manner, meaning "ALGOLIA", "algolia", and "CommerceTools" are all acceptable.
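If you prefer the command line, the variable can be set with the GitHub CLI. A sketch; the repository slug is a placeholder:

```shell
# Set the search provider variable on the deployment repository.
# REPO is a placeholder; adjust it to your copy of the template.
REPO="your-org/epc-deployment-configurations"
PROVIDER="algolia"   # or "commercetools"; the value is case-insensitive
if command -v gh >/dev/null 2>&1; then
  gh variable set TO_K8S_SEARCH_PROVIDER_TO_USE --body "$PROVIDER" --repo "$REPO" \
    || echo "Could not set variable (check gh auth)."
fi
```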
From your repository, start the workflow "Install Precomposer on k8s cluster". (Main menu of the project -> Actions -> if necessary, close the GitHub introduction dialogue -> Install Precomposer on k8s cluster).
The workflow has the following input parameters (how the DNS names are composed from the specified values is explained in the next section):

| Parameter | Description |
| --- | --- |
|  | The base domain, under which Precomposer should be reachable. |
|  | Prefix before the base domain. |
|  | Name of the environment to be created. Common names here are |
|  | Base name of the namespace the Precomposer core will be installed in. PROJECT_ENVIRONMENT will be appended - so e.g. |
|  | Which e-mail address should be used when registering the Let's Encrypt certificate? Leave the default when using precomposer.shop as the base domain; must be set when using another domain. |
|  | If you want to install to multiple clusters, you can define here the name of the GitHub environment where the cluster config secret is stored. |
|  | The organisation and name of the repository where demo data is stored. If not empty, a new import workflow is triggered. |
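The workflow can also be triggered from the command line with `gh workflow run`. In the sketch below, the repository slug and the input ids are illustrative placeholders; check the workflow file under .github/workflows/ in your repository for the actual input names:

```shell
# Trigger the installation workflow via the GitHub CLI.
# REPO and the -f input ids are assumptions; verify them against the
# workflow definition in your repository before running.
REPO="your-org/epc-deployment-configurations"
if command -v gh >/dev/null 2>&1; then
  gh workflow run "Install Precomposer on k8s cluster" \
    --repo "$REPO" \
    -f base_domain="precomposer.shop" \
    -f project_environment="demo" \
    || echo "Could not trigger workflow (check gh auth and input names)."
fi
```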
After submitting the parameters, GitHub will start the workflow and a new "workflow run" will appear after a few seconds.
To watch the progress click on it:
You can also click on the individual jobs:
The execution usually takes a few minutes. However, individual steps may take longer (e.g. waiting for services to start up or registering DNS names).
The DNS names must be registered in the DNS. If you have specified precomposer.shop as the base domain, this step is not necessary; the names will be registered for you automatically.
If you use your own domain, you must set these DNS names to point to the external IP (e.g. of the load balancer) of the Precomposer setup. The detected IP is displayed in the job "!!! MANUAL ACTION HERE: Getting started", in the step with the same name. If the external IP could be determined by the setup process, it is displayed there. If not, you have to find it out yourself using the means of your cluster provider.
If you are not using precomposer.shop: as soon as you know the IP, you can (and should) set up the DNS names manually, while the setup workflow is still running. (Background: DNS propagation can take some time, and the Let's Encrypt certificate can only be issued once it has completed.)
Make sure that all these DNS names point to the same external IP of your cluster (load balancer).
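A quick consistency check can be done with `dig`. The host names below are hypothetical; use the names printed by the setup job:

```shell
# Check that all DNS names resolve to the same external IP.
# HOSTS is a placeholder list; replace it with the names from the setup job.
HOSTS="shop.example.com api.example.com"
if command -v dig >/dev/null 2>&1; then
  for h in $HOSTS; do
    ip=$(dig +short "$h" | head -n1)
    echo "$h -> ${ip:-<no record>}"
  done
fi
```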
This is how the DNS names are composed: