HPCC-24806

Explore work needed for copying data between bare metal and cloud


Details

    • Type: Improvement
    • Status: Accepted
    • Priority: Major
    • Resolution: Unresolved
    • Labels: cloud

    Description

      Notes so far:

      DFU server
      ==========
      Where does DFUserver fit in a containerized system?

      DFU has the following main functionality in a bare metal system:
      a) Spray a file from a 1-way landing zone to an N-way thor
      b) Convert the file format when spraying. I suspect UTF-16->UTF-8 is the only conversion actually used.
      c) Spray multiple files from a landing zone to a single logical file on an N-way thor
      d) Copy a logical file from a remote environment
      e) Despray a logical file to an external landing zone.
      f) Replicate an existing logical file on a given group.
      g) Copy logical files between groups
      h) File monitoring
      i) logical file operations
      j) superfile operations (see the sketch after this list)
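
      For example, a minimal sketch of (j) in ECL, using the standard superfile API; all logical file names here are hypothetical:

          IMPORT STD;

          // Group existing logical files under one superfile so ECL can read
          // them as a single dataset. The file names are made up.
          SEQUENTIAL(
              STD.File.CreateSuperFile('~demo::super::daily'),
              STD.File.StartSuperFileTransaction(),
              STD.File.AddSuperFile('~demo::super::daily', '~demo::in::day1'),
              STD.File.AddSuperFile('~demo::super::daily', '~demo::in::day2'),
              STD.File.FinishSuperFileTransaction()
          );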

      ECL has the ability to read a logical file directly from a landing zone using the 'FILE::<ip>' file syntax, but I don't think it is used very frequently.
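
      For illustration, a minimal sketch of that direct-read syntax; the IP address, path and record layout are hypothetical:

          // Read a file in place on a landing zone node (no spray), using the
          // 'file::<ip>' logical filename scope.
          rec := RECORD
              STRING name;
              STRING value;
          END;
          ds := DATASET('~file::10.20.0.5::mnt::landingzone::mydata.csv', rec, CSV);
          OUTPUT(CHOOSEN(ds, 10));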

      How does this map to a containerized system? I think the same basic operations are likely to be useful.
      a) In most scenarios landing zones are likely to be replaced with (blob) storage accounts. But for security reasons these are likely to remain distinct from the main location used by HPCC to store datasets. (The customer will only have access keys to copy files to and from those storage accounts.) The containerized system has a way for ECL to directly read from a blob storage account ('PLANE::<plane>', see the sketch below), but I imagine users will still want to copy the files in many situations, to control the lifetime of the copies etc.
      b) We still need a way to convert from UTF-16 to UTF-8, or to extend the platform to allow UTF-16 to be read directly.
      c) This is still equally useful, allowing a set of files to be stored as a single file in a form that is easy for ECL to process.
      d) Important for copying data from an existing bare metal system to the cloud, and from a cloud system back to a bare metal system.
      e) Useful for exporting results to customers
      f+g) Essentially the same thing in the cloud world, since groups map to storage planes. It might still be useful to have copies of a file on more than one storage plane.
      h) I suspect we will need to map this to cloud-specific APIs.
      i+j) Just as applicable in the container world.

      Broadly, landing zones in bare metal map to special storage planes in containerized, and groups also map to more general storage planes.
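
      A matching sketch of the direct blob read mentioned in (a), assuming a storage plane named 'mylz' has been configured; the plane name, path and layout are hypothetical:

          // Read a file in place from a (blob) storage plane in a containerized
          // system, using the 'plane::<plane>' logical filename scope.
          rec := RECORD
              STRING name;
              STRING value;
          END;
          ds := DATASET('~plane::mylz::incoming::mydata.csv', rec, CSV);
          OUTPUT(CHOOSEN(ds, 10));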

      There are a couple of complications connected with the implementation:
      1) Copying is currently done by starting an ftslave process on either the source or the target nodes. In the container world there is no local node, and I think we would prefer not to start a process in order to copy each file.
      2) Copying between storage groups should be done using the cloud provider api, rather than transferring data via a k8s job.

      Suggestions:

      • Have a load-balanced dafilesrv which supports multiple replicas. It would have a secure external service, and an internal service for trusted components.
      • Move the ftslave logic into dafilesrv, exposing the current ftslave actions as new dafilesrv operations.
      • When copying from/to a bare metal system the requests are sent to the dafilesrv for the node that currently runs ftslave. For a container system the requests are sent to the load-balanced service.
      • It might be possible to migrate to lambda-style functions for some of the work...
      • A later optimization would be to use a cloud service where possible.
      • When local split points are supported it may be better to spray a file 1:1 along with partition information. Even without local split points it may still be better to spray a file 1:1 (cheaper).
      • What are the spray targets? It may need to be storage plane + number of parts, rather than a target cluster. The default number of parts is the #devices on the storage plane.

      => Milestones
      a) Move ftslave code to dafilesrv (partition, pull, push) [Should be included in 7.12.x stream to allow remote read compatibility?]
      b) Create a dafilesrv component to the helm charts, with internal and external services.
      c) use storage planes to determine how files are sprayed etc. (bare-metal, #devices)
      Adapt dfu/fileservices calls to take (storageplane,number) instead of cluster (see the sketch after these milestones). There should already be a 1:1 mapping from existing clusters to storage planes in a bare-metal system, so this may not involve much work. [May also need a flag to indicate if ._1_of_1 is appended?]
      d) Select correct dafilesrv for bare-metal storage planes, or load balanced service for other.
      (May need to think through how remote files are represented.)

      => Can import from a bare metal system or a containerized system using the command line?
      NOTE: Bare-metal to containerized will likely need push operations on the bare-metal system. (And therefore serialized security information)
      This may still cause issues since it is unlikely containerized will be able to pull from bare-metal.
      Pushing, but not creating a logical file entry, on the containerized system should be easier since it can use a local storage plane definition.

      e) Switch over to using the ESP-based meta information, so that it can include details of storage planes and secrets.
      [Note this would also need to be in 7.12.x to allow remote export to containerized; that may well be a step too far]

      f) Add option to configure the number of file parts for spray/copy/despray
      g) Ensure that eclwatch picks up the list of storage planes (and the default number of file parts), and has ability to specify #parts.
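
      To make milestone (c) concrete, a hypothetical sketch of what an adapted fileservices call might look like. Today STD.File.Copy takes a destination group; the idea here is that the same slot would name a storage plane instead. The plane and file names are made up, and a part-count option is an assumption from this issue, not the current API:

          IMPORT STD;

          // Copy a logical file onto the 'data' storage plane. Under milestone (c)
          // the destination argument names a plane rather than a cluster/group.
          // An option to control the number of parts (defaulting to #devices on
          // the plane) is assumed in this issue, so it is only noted here.
          STD.File.Copy('~demo::in::dataset',     // source logical file
                        'data',                   // destination: storage plane, not group
                        '~demo::out::dataset',    // destination logical file
                        allowOverwrite := TRUE);  // named optional parameter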

      Later:
      h) plan how cloud services can be used for some of the copies
      i) investigate using serverless functions to calculate split points.
      j) Use refactored disk read/write interfaces to clean up read and copy code.
      k) we may not want to expose access keys to allow remote reads/writes - in which case data would need to be pushed from a bare-metal dafilesrv to a containerized dafilesrv.

      Other dependencies:

      • Refactored file meta information. If this is switching to being plane-based, then the meta information should also be plane-based. The main difference is not including the path in the meta information (it can just be ignored).
      • ESP service for getting file information. When reading remotely, access needs to go via this service now...

People

    Assignee: ghalliday Gavin Halliday
    Reporter: ghalliday Gavin Halliday