Logging and monitoring data migration procedure

This article describes how to migrate applications together with their data from one namespace to another. The procedure is primarily useful for the migration to version 1.18, in which two features (logging and monitoring) were moved to another namespace. Administrators may also use it in other specific scenarios.

Prerequisites

You have installed a Kublr cluster v1.16.0 with Logging or Monitoring enabled.

Upgrade without Saving Data

  1. Migrate the cluster from one control plane to another as described in the documentation.

  2. Edit the cluster spec and turn off the logging and monitoring features.

    Spec patch

     For logging
     logging:
         logCollection:
             enabled: false
    
     For monitoring
     monitoring:
         enabled: false
    
  3. Delete the helm2 releases for logging and monitoring.

    Console

     $ helm2 delete --purge kublr-logging
     release "kublr-logging" deleted
     $ helm2 delete --purge kublr-monitoring
     release "kublr-monitoring" deleted
    
  4. Delete the PVCs for logging and monitoring.

    Console

     $ kubectl -n kube-system delete pvc -l 'app in (elasticsearch, kublr-monitoring-grafana, kublr-monitoring-prometheus)'
    
  5. Enable the features in the spec again and wait for the installation to complete.

     For logging
            logging:
              logCollection:
                enabled: true
    
     For monitoring
            monitoring:
              enabled: true
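To confirm that the reinstallation finished, you can watch the feature pods come up. This sketch assumes the features are installed into the kublr namespace after version 1.18; adjust the namespace if your cluster differs.

```shell
# Watch the logging and monitoring pods start in the target namespace
# (assumes the "kublr" namespace; press Ctrl+C to stop watching).
kubectl get pods -n kublr -w
```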
    

Migration with Saving Data

  1. Migrate the cluster from one control plane to another as described in the documentation.

  2. If you created custom PVCs for logging or monitoring, add one of the labels app=elasticsearch, app=kublr-monitoring-grafana, or app=kublr-monitoring-prometheus to each PVC so that it is migrated automatically.
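For example, a custom Elasticsearch PVC could be labeled as follows; the PVC name below is a placeholder, substitute your own.

```shell
# Label a custom PVC so the migration scripts pick it up.
# "my-custom-elasticsearch-data" is a hypothetical name; use your PVC's name.
kubectl -n kube-system label pvc my-custom-elasticsearch-data app=elasticsearch
```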

  3. Create a folder and download the scripts prepare.sh and patch.sh.

  4. Open the script files and edit the path to helm2 in the HELM2 variable (Helm v2.14.0 or newer is required).
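The variable at the top of each script might look like this; the path shown is only an example, point it at your actual Helm v2 binary.

```shell
# Path to the Helm v2 binary (v2.14.0 or newer); adjust for your system.
HELM2=/usr/local/bin/helm2
```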

  5. Run the prepare.sh script, which will:

    • Mark all PVs as Retain to protect them from deletion.
    • Back up values and secrets to the ./data folder.
    • Create PVCs with the same names in the kublr namespace.
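The deletion-protection step can also be performed manually; a rough sketch for a single volume is shown below, where <pv-name> stands for the volume bound to a logging or monitoring PVC (see the VOLUME column of "kubectl get pvc -n kube-system").

```shell
# Mark a PV as Retain so that deleting its PVC later
# does not delete the underlying volume.
kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```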
  6. Edit the cluster spec and turn off the logging and monitoring features.

    Spec patch

     For logging
     logging:
       logCollection:
         enabled: false
    
     For monitoring
     monitoring:
        enabled: false
    
  7. Run the patch.sh script, which will:

    • Delete the old helm charts.
    • Delete the old PVCs.
    • Rebind the PVs to the new PVCs.
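The rebinding step works by clearing the stale claim reference on the released PV, so that Kubernetes can bind it to the new PVC of the same name in the kublr namespace. Roughly:

```shell
# After the old PVC is deleted the PV goes to "Released"; clearing claimRef
# lets it bind to the new PVC in the kublr namespace.
kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'
```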
  8. Wait about 20 seconds and check the status of the new PVCs; it should be Bound.

    Console

     $ kubectl get pvc -n kublr -l 'app in (elasticsearch, kublr-monitoring-grafana, kublr-monitoring-prometheus)'
     NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
     data-kublr-logging-elasticsearch-data-0     Bound    pvc-9d4c97ad-d9fc-4d11-b2b4-eb253383602c   120Gi      RWO            kublr-system   34m
     data-kublr-logging-elasticsearch-master-0   Bound    pvc-e2de333f-3177-486a-9a5e-35a67a968972   4Gi        RWO            kublr-system   34m
     kublr-monitoring-grafana                    Bound    pvc-89dc4a54-7d3d-4c51-9807-1714576cdb2e   10Gi       RWO            kublr-system   34m
     kublr-monitoring-prometheus                 Bound    pvc-7cf1877d-8316-4356-a806-8b6484093e96   120Gi      RWO            kublr-system   34m
    
  9. Edit the cluster spec. Turn the features on and specify the custom PVCs in the spec (if you used different PVC names, use those instead).

     For logging
            logging:
              logCollection:
                enabled: true
    
     For monitoring
            monitoring:
               enabled: true
               values:
                 grafana:
                   persistence:
                     preconfiguredPersistentVolumeClaim: kublr-monitoring-grafana
                 prometheus:
                   persistence:
                     preconfiguredPersistentVolumeClaim: kublr-monitoring-prometheus
    
  10. Check the status of the features in the control plane.

  11. Check that everything works as expected and, if necessary, change the PV reclaim policy back to Delete.
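Reverting the reclaim policy can be done per PV once the migration is verified, for example:

```shell
# Restore the default reclaim behavior for a migrated volume.
kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```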