Blue Moon Backup Documentation

Troubleshooting 

Error "local error: tls: record overflow" 

This message means the connection was corrupted over the network, and the Cloud Backup client aborted the connection.

This can happen because of random network conditions. Retrying the operation should fix the issue.

If the issue recurs, something is likely interfering with packets on your network. Possible causes include:

  • Failing NIC
  • Bad NIC driver or driver configuration
  • Failing RAM, on either the endpoint machine or any of the intermediate routers
  • Outdated firewall or proxy, performing incorrect SSL interception

For more information, please see the record_overflow section in IETF RFC 5246.

Microsoft SQL Server backup encountered a VDI error 

You should ensure that the necessary VDI .dll files are registered and are the correct version for your SQL Server installation. You can use Microsoft SQL Server Backup Simulator to check the status of the VDI .dll files.

Error "Access is denied" when backing up files and folders on Windows 

An "Access Denied" error message means that the Windows user account running the backup job does not have access to read the file content.

Current versions of the Cloud Backup client automatically create a service account with all necessary permissions to read local files. If you are experiencing "Access Denied" errors on Cloud Backup 18.6.0 or later, you may be trying to back up a network path that has been mounted as a directory. Please see the "Accessing Windows network shares and UNC paths" section below for more information.

If you are experiencing "Access Denied" errors on Cloud Backup 18.6.0 or later, and you are certain that you are not backing up a mounted network path, please contact support.

Antivirus detects Cloud Backup as a virus or malware 

Cloud Backup is a safe application. Any such detection is a "false positive".

When a new Cloud Backup version is released, it may appear to be a new, unknown program. An unknown program that installs system services, reads files from disk, and uploads them over the network could reasonably be considered malware if it had been installed without consent, so it is understandable that an antivirus product might flag it.

In this situation, there are some actions you can take:

  • Please ensure your Antivirus product is fully up-to-date.
  • Choose to "allow" or "white-list" the file in the Antivirus software. This may send a signal to the Antivirus software vendor that the software is safe (e.g. ESET LiveGrid, Windows Defender Automatic Sample Submission, Kaspersky KSN, etc).
  • Enable Authenticode signing on Windows. This may provide additional "reputation" to the software installer.

Error "backup-tool.exe couldn't be launched. CreateProcess() failed: Access is denied" message 

This error message indicates that something on the PC is blocking the Cloud Backup client's main backup-tool.exe program from running. This is most likely antivirus software. Please follow the above steps to whitelist the Cloud Backup client in your antivirus application.

Network connectivity errors 

Cloud Backup uploads files to the Cloud Backup Server (or to a cloud storage provider) over the internet. Occasionally, you may see errors such as the following:

  • Couldn't save data chunk:
  • HTTP/1.x transport connection broken
  • net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  • wsarecv
  • wsasend
  • An existing connection was forcibly closed by the remote host
  • dial tcp: lookup [...]: no such host
  • connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

Cloud Backup retries the upload several times, but eventually gives up. After a failed data chunk upload, you may see several more messages of the form Couldn't save data chunk: context canceled while the Cloud Backup client terminates the other upload threads.

Network errors have many possible causes:

  • Customer's PC
  • Customer's network
  • Customer's ISP
  • Internet-wide outages between customer's ISP and your ISP

To troubleshoot these issues, please check:

  • Does the backup succeed if it is retried?

    • Many network errors are temporary and will only occur rarely. In addition, a repeated second backup job will often run faster because many of the existing data chunks have already been uploaded. (Any unused data chunks in the Storage Vault will be automatically cleaned up by the next retention pass.)
  • Does the error message always happen at a certain time of day?

    • It may be possible to reschedule the backup to avoid times of heavy internet congestion.
  • Are there any corresponding messages for around the same time in your Cloud Backup Server logs?

    • This is important to determine the cause of some failures.
    • Some relevant Cloud Backup Server log messages take the form Error saving upload stream or Blocking re-upload of preexisting file

Accessing Windows network shares and UNC paths 

The Cloud Backup client can back up Windows network paths, and back up to Windows network storage (SMB / CIFS). However, because Cloud Backup runs as a service user, there are some issues with authentication to be aware of.

Please note that if you are using Cloud Backup to back up data from a network device, you should prefer to install Cloud Backup directly on the device instead of backing it up over the network. This will also significantly improve performance, as less data needs to be transferred over the LAN.

Mapped network drives 

On Windows, each logged-on user session has its own set of mapped network drives. The service user account is unlikely to have any mapped drives. If you see error messages like WARNING Missing: 'Z:\', this is probably the reason. You can work around this by using a UNC path instead.

Authentication 

If the UNC share requires authentication, the service user account is probably not logged-in to the UNC share. If you see error messages like WARNING Lstat: CreateFile \\?\UNC\...: Access is denied., this is probably the reason.

Workarounds, ranked in order of preference:

  • If you are storing data on a network share, you can work around this issue by switching from Windows network shares (SMB) to a network protocol that has built-in credential support. For instance, an S3-compatible server (e.g. the free Minio server) or an SFTP server.

  • In Cloud Backup, you can work around this issue by adding net use \\HOST\SHARE /user:USERNAME PASSWORD as a "Before" command to the backup job.

    • If you are storing data on a UNC path, you can add this "Before" command on the Storage Vault instead of on the Protected Item. This will ensure it is run for all backup jobs going to that Storage Vault.
  • You can work around this issue by changing the Windows Service to use a different user account.

    • For Cloud Backup 18.6.0 and later, this is the Cloud Backup (delegate service) service.
    • If you are using Cloud Backup on a Windows Server machine that is acting as the Domain Controller, you must choose a domain account.

Error "Couldn't take snapshot: The specified object was not found" using DESlock+ 

Some software, although launched from within the user's session, may elevate itself or run under the SYSTEM user account. If this happens, the encryption keys will not be available to that software's process, and access to the encrypted containers will be denied.

This behavior can occur when software runs under a different user context within the user's session. The Cloud Backup client runs the backup as a service user account (usually "NT SERVICE\backup.delegate", or "LOCAL SYSTEM" in some cases), and the DESlock virtual drive is unavailable to other user accounts on the system.

It's possible to mount a virtual disk globally so that all users on the system can access its contents. This is done using the DESlock+ command line tool.

Follow DESlock's own instructions to mount the drive for all users.

Source: https://support.deslock.com/index.php?/Default/Knowledgebase/Article/View/244

Error "VSS Error: Couldn't take snapshot. The shadow copy provider had an unexpected error while trying to process the specified operation" 

Possible causes for this error include:

  • Backing up mapped network drives. You cannot use VSS on a network share.

  • Shadow storage on the source drives is not configured or not large enough. The shadow storage size can be checked and manually changed from an elevated command prompt (see the example after this list):

    • To check the current limit set: vssadmin list shadowstorage
    • To change the limit: vssadmin Resize ShadowStorage /For=X: /On=X: /Maxsize=XX%
  • Microsoft's native snapshot manager is only able to perform one snapshot at a time. If a snapshot process is already running when the backup job starts, then the backup job could fail. Stopping and restarting the Volume Shadow Copy service can resolve this problem. To do this, open an elevated command prompt window and run the following commands:

    • net stop vss
    • net start vss

    • Should a service restart not resolve the issue, a reboot of the server has also been known to clean up the snapshot manager correctly.

  • Multiple backup products installed could cause this error. Many backup solutions have their own proprietary snapshot manager which can cause conflicts with other backup solutions installed on the system.

  • VSS snapshots have been known to fail because an advanced format drive is connected to the machine.

  • Check that all VSS writers are installed and in a working state. From an elevated command prompt: vssadmin list writers
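
For example, to allow the shadow storage for the C: volume to grow to 10% of the volume size, run the following from an elevated command prompt (the drive letter and percentage are placeholders only; adjust them to suit your environment):

vssadmin Resize ShadowStorage /For=C: /On=C: /Maxsize=10%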

If you're still having issues, please open a support ticket with the results of the above troubleshooting steps.

Jobs left in Running state 

Cloud Backup is responsible for closing off a job log with the server. If the PC is shut down unexpectedly, a job can be left in the "Running" state indefinitely.

The following situations will clean up old, inactive "Running" jobs:

  • Running a retention pass
    • For safety reasons, a retention pass requires the Cloud Backup client to temporarily take exclusive control over a Storage Vault. The Cloud Backup client makes a number of checks to verify this exclusivity, and the practical consequence is that once a retention pass runs, all past backup jobs must, by definition, no longer be running.
  • Running a new backup job on the same device
    • The Cloud Backup client can compare lock files in the Storage Vault with the running process IDs in Task Manager. If a lock file was created by a process that is no longer running, that job must have stopped.
  • Running a software update
    • If a software update job completes successfully with a changed version number, the update process must have terminated all prior jobs, so all past backup jobs on this device are no longer running.

A future version of Cloud Backup client may automatically clean up Running jobs in additional situations.

Out of memory 

Cloud Backup needs RAM to run. Most of this memory is used to hold deduplication indexes, so the amount of RAM used is roughly proportional to the size of the Storage Vault.

You might see these error messages:

  • runtime: VirtualAlloc of 1048576 bytes failed with errno=1455 on Windows
  • 0x5AF ERROR_COMMITMENT_LIMIT: The paging file is too small for this operation to complete. on Windows
  • fatal error: out of memory on all platforms

On Linux, when the system is out of memory (OOM), the kernel "OOM Killer" subsystem will immediately terminate a process of its choosing to free up memory. If you see an error message like signal: killed in the Cloud Backup client on Linux, this means the process was terminated by a user or by a subsystem, possibly the OOM Killer. You can check for this in dmesg or kern.log.

You can reduce Cloud Backup's RAM usage by limiting how much data is in each Storage Vault. For instance, instead of having multiple devices backing up into a single Storage Vault, create a separate Storage Vault for each device. This will reduce the deduplication efficiency, but it will also reduce the necessary memory usage.

Trade-offs 

Some trade-offs are possible that reduce the Cloud Backup client's memory usage at the expense of other system resources:

Rescan unchanged files 

This option causes the Cloud Backup client to read more data from the source disk, reading less data from the Storage Vault into in-memory indexes. This can have a varied impact on RAM usage, and may be positive or negative depending on the shape of your directories.

Prefer temporary files instead of RAM 

The "Prefer temporary files instead of RAM" option on a backup job schedule will cause the Cloud Backup client to keep indexes in an on-disk database file, instead of a pure in-memory index. The on-disk database file is mapped into pageable memory, that can more easily be reclaimed by the OS when the system is under memory pressure.

Depending on how you measure the Cloud Backup client's memory usage, this option may not immediately appear to lower memory usage if your measurement includes memory-mapped (mmap) file sections. However, the resident working set is reduced.

There is a major performance penalty for using this option (approximately 5x or worse) and it is not generally recommended.

Error "HTTP 500" in Cloud Backup logs 

If you see an HTTP 500 error message in the backup logs, this means the server encountered an error.

If you see this while performing an operation to Cloud storage, then the cloud storage provider experienced an error at their end.

  • The error message may contain more detail; or
  • You can contact the cloud provider for more information; or
  • The operation may succeed if you retry it a short time later.

Change of hardware causes registration dialog to appear 

The Cloud Backup client detects the current device based on a hardware ID.

The hardware ID may be changed in some situations:

  • if you replace the motherboard or CPU; or
  • if you upgrade the BIOS / UEFI, without preserving hardware IDs; or
  • if you virtualise a physical server; or
  • if you migrate a VM guest to a different VM host, without preserving hardware IDs; or
  • if you install "sandboxing" software, or install certain PC security software that includes a "sandboxing" feature (e.g. Comodo Containment); or
  • if you make certain specific modifications to the operating system.

In these situations, the device's hardware ID will change, and Cloud Backup will recognize the PC as a new device.

Handling a changed device ID 

If your device is recognized as a new device, you should register it again.

The original backup data is still preserved in the Storage Vault, and will be deduplicated against any future backups from this device.

You can move the Protected Item settings from one device to another, by using the Copy/Paste buttons in the web interface on the Protected Items tab.

Once the new device has been properly set up, the old device should be revoked to prevent it from incurring a license charge. Alternatively, removing all Protected Items from the old device will also prevent it from being charged.

The backup job log history will be preserved in the customer's account, but it will be associated with the old device.

  • Once you de-register the original device, it will show as "Unknown device (XXXXX...)" in the job history.

  • The customer can still see these old jobs in the Cloud Backup Server web interface.

  • The customer can still see these old jobs in Cloud Backup if they use the filter option > "All devices".

Because the device is detected as a new device, the billing period for this device will be restarted.

Storage Vault Locks 

Lock files are an important part of Cloud Backup's safety design. Cloud Backup uses lock files to ensure data consistency during concurrent operations.

Problem statement 

Cloud Backup supports multiple devices backing up into a shared Storage Vault simultaneously. But when it runs a retention pass to clean up data, it's very important that no other backup jobs are running simultaneously.

A retention pass (A) looks at what data chunks exist in the Vault, then (B) deletes the unused ones.

A backup job (A) looks at what data chunks exist in the Vault, then (B) uploads new chunks from the local data, and uploads a backup snapshot that relies on both pre-existing and newly-uploaded chunks.

It's perfectly safe for multiple backup jobs to run simultaneously, even from multiple devices.

But, it is not safe for a retention pass to run at the same time as a backup job, because if the steps are interleaved (retention A > backup A > retention B > backup B) then a backup job might write a backup snapshot that refers to unknown chunks, resulting in data loss.

Cloud Backup must prevent you from running a backup job and a retention pass simultaneously.

Lock files 

In order to check whether a retention pass is currently running, communication must occur between all devices that could potentially be using the Storage Vault.

To determine whether any other device is actively using a Storage Vault, each job writes a temporary text file into the Storage Vault and deletes it when the job completes. This is the only mechanism supported across all Storage Vault types (i.e. local disk / SFTP / S3 / etc). Any other job can then look for these files to see what other operations are taking place concurrently.

Jobs in a Storage Vault are classified into two categories:

  • Exclusive (retention passes)
  • Non-exclusive (backup/restore jobs)

Multiple non-exclusive jobs may run simultaneously from any device. A non-exclusive job will refuse to start if any exclusive jobs are currently running. An exclusive job will refuse to start if any other jobs are running.

Specifically:

  • If a backup job is currently running, Cloud Backup will refuse to start a retention pass.
  • If a retention pass is currently running, Cloud Backup will refuse to start a backup job.

Downsides of lock file design 

If the Cloud Backup client is stopped suddenly (e.g. a PC crash), the lock file would not be removed. Other Cloud Backup client processes would not realize that the job had stopped. This could prevent proper functioning of backup jobs and/or retention passes.

Cloud Backup will alert you to this issue by failing the job. The error message should explain which device and/or job was responsible for originating the now-stale lock file.

You may see error messages of the form:

  • Locked by user '...' on this device (PID #...) since ... (... days ago)
  • Locked by user '...' on computer '...' (PID #...) since ... (... days ago)
  • However, the responsible process might have stopped.
  • If you investigate this process, and are absolutely certain it won't resume, then it's safe to ignore it and continue.

It is possible to delete lock files to recover from this situation. However, it is critical that you manually investigate the issue to ensure that the responsible process really has stopped. Consider that a PC may go to sleep at any time, and wake up days - or weeks - later, and immediately resume from the middle of a backup or retention operation; if the lock files were removed incorrectly, data loss is highly likely.

If you are sure that the responsible process is stopped, you can delete the lock files.

You can initiate this in Cloud Backup, on the "Account" pane > right-click the Storage Vault > "Advanced" > "Clean up lock files" option.

Automatic unlock 

Cloud Backup will automatically delete stale lock files when it determines that it is safe to do so.

  • When the Cloud Backup client is running on the same PC as a potentially-stale lock file, it can check the running processes to see if the originator process is still running.

A future version of the Cloud Backup client may be able to automatically detect and remove stale lock files in more situations.

Recovering from unsafe unlock operations 

If you encounter a Packindex '...' for snapshot '...' refers to unknown pack '...', shouldn't happen error, a data file has been erroneously deleted inside the Storage Vault. Data has been lost. This can happen if the "Unlock" feature is used without proper caution as advised above.

In this situation, you can recover the remaining data in the Storage Vault by following the instructions in the "Recovering from file corruption" section above.

Backup process stalled on "Preparing Storage Vault for first use" 

The first step when accessing a new, uninitialised Storage Vault is to generate and store some encryption material.

If a backup to a new Storage Vault seems to hang at this initial step, it's likely that Cloud Backup is failing to access the storage location and is repeatedly retrying and timing out. An error message may appear after some extended duration.

Some possible causes of this issue are:

  • Storage Vault misconfiguration
    • For Storage Vaults located in a Server bucket: check that the Storage Vault properties > "Hostname" field points to a valid URL and not e.g. 127.0.0.1
    • For Storage Vaults using cloud bucket credentials: double-check the credentials, and ensure there are no extra spaces pasted around the field values
  • Outdated CA certificates
    • This would prevent Cloud Backup from making an HTTPS / SSL connection to the storage location.
    • On Windows, run Windows Update
      • For Storage Vaults located in a Server bucket, you can also check if the system Internet Explorer browser is able to load the Cloud Backup Server's web interface
    • On Linux, update the ca-certificates package
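
For example, on a Debian or Ubuntu system, the CA certificate store can be refreshed as follows (a sketch; other distributions use different package managers):

# Debian/Ubuntu example; other distributions use different package managers
sudo apt-get update
sudo apt-get install --reinstall ca-certificates
sudo update-ca-certificates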

Error "Media is write protected" backing up OneDrive with VSS 

To save on disk space, OneDrive (and some other cloud storage providers) use a system where some files are only "virtually" stored on the local disk, and are materialized from the cloud storage on-demand.

When you use the "Take filesystem snapshot" option, the Cloud Backup client takes a VSS snapshot of the disk. This is a read-only snapshot.

When you back up the OneDrive directory with VSS enabled, OneDrive is not able to download files into the snapshot, because the snapshot is read-only. This causes the "Media is write protected" error message.

In this situation, your OneDrive data is not being protected and is not available for restore.

You can work around this issue by creating two Protected Items: one with VSS enabled, that excludes the OneDrive directory; and a second one with VSS disabled, that only includes the OneDrive directory.

Note that if OneDrive needs to materialize a lot of data from the cloud, then backing up the OneDrive directory may cause a lot of data to be downloaded.

A future version may avoid this issue by automatically disabling VSS for the OneDrive directory.

Error "Access to the cloud file is denied" backing up OneDrive 

To save on disk space, OneDrive (and some other cloud storage providers) use a system where some files are only "virtually" stored on the local disk, and are materialized from the cloud storage on-demand.

If you encounter the "Access to the cloud file is denied" error message, this means that file in question does not exist on the local PC, and the OneDrive virtual filesystem driver is refusing to download this file on-demand.

At the time of writing, the only available workaround is to disable the "Files-On-Demand" feature in OneDrive. However, this may cause an unacceptable increase in local disk usage for some customers.

To disable the "Files-On-Demand" feature in OneDrive:

  1. Right-click OneDrive in the System Tray
  2. Click the menu icon -> Settings -> Settings tab -> "Files-On-Demand" section -> disable the "Save space and download files as you use them" option

Error "EFS-encrypted files may be unusable once restored" 

You may see a warning of this form in the backup job logs:

EFS-encrypted files may be unusable once restored, unless you also backup the EFS encryption keys from this PC.
To disable this warning, please ensure you have backed up the EFS encryption keys, and then enable the 'I confirm EFS keys are exported' option in the Protected Item settings.

EFS is a Windows feature that allows you to encrypt individual files on disk. The backup job was successful, but if you restore the data to a new PC, the files might not be readable because the EFS encryption keys are tied to the Windows user account. In effect, the backup might not be restorable in a practical sense.

For more information, please see the full article on EFS in the "Protected Items" section.

Error "The target path 'X:\WindowsImageBackup' already exists - please safely remove this directory and retry the backup." 

The "Windows System Backup" Protected Item type uses the wbadmin program to write a disk image to the spool directory; backs up the spool directory; and then cleans up the spool directory. The Cloud Backup client automatically removes this directory after the backup, even if the backup failed.

If the directory exists at the start of a backup job, this could mean either

  1. the Cloud Backup client did not have the chance to clean up the directory (e.g. the PC was not shut down safely); or
  2. another Cloud Backup job is running simultaneously; or
  3. other non-Cloud Backup software on the PC is also using the wbadmin functionality for System State or Windows System Backup.

You can avoid case 2 above by using the "Skip if already running" option.

It's not generally possible to distinguish between case 1 and case 3 above. If you look at the job history or the customer's PC, and you are able to make a positive distinction between these cases, it may be safe to delete the directory.

You can temporarily add the following command as a "Before" command to the backup job:

rmdir /S /Q "X:\WindowsImageBackup\"

You should then remove this command from the job settings after the command has run, because this command would cause problems if two Cloud Backup jobs ever run simultaneously in the future.

Error "too many open files" 

A file handle is an abstract concept that includes network connections, temporary files, and disk files.

If you experience the too many open files error message, this means one of the following:

  • (A) the Cloud Backup client is running at the same time as another process with high file-handle usage; or
  • (B) the Cloud Backup client has been restricted to a very low number of available file handles; or
  • (C) the Cloud Backup client is itself using an excessive number of file handles

During a backup job, the Cloud Backup client typically uses

  • approximately 10-20 handles for files being read from the disk; and
  • approximately 10-20 handles for open network connections; and
  • an unknown number of temporary cache files created during the "Building cache" phase

You may be able to work around this issue by

  • raising the kernel file handle limit (described below); or
  • ensuring no file-intensive processes are running at the same time as the Cloud Backup job; or
  • ensuring multiple Cloud Backup jobs are not running simultaneously; or
  • for "File and Folder" protected items, by enabling the "Rescan unchanged files" option. This reduces the number of temporary files that the Cloud Backup client uses for local caching. However, it may reduce the backup performance.

If you discover your device has a different limit than the Operating System default, you should find the configuration file where the limit has been altered.

A future version may redesign the cache mechanism to use fewer local temporary files.

On macOS 

macOS supports system and per-process limits on the number of open file handles.

The default limits are set to quite low values (at the time of writing: 12288 system-wide, 10240 per-process).

You can check the current system-wide limits by running: sysctl kern.maxfiles

You can check the current per-process limit by running: sysctl kern.maxfilesperproc

On macOS 10.12.x and later, you can raise the system-wide limits by creating a .plist file in the /Library/LaunchDaemons/ directory. Please see these instructions for more information.
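
As a sketch of that approach, a LaunchDaemon such as the following could be saved as /Library/LaunchDaemons/limit.maxfiles.plist. The filename and the soft/hard limit values (65536 / 200000) are examples only, not values prescribed by Cloud Backup:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <!-- Runs "launchctl limit maxfiles <soft> <hard>" at boot; the limit values below are examples only -->
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>65536</string>
      <string>200000</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>

Load the file with sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist, or reboot, and then verify the new limits by running launchctl limit maxfiles.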

On macOS 10.11.x and earlier, you can raise the system-wide limits by updating the kern.* settings in /etc/sysctl.conf. Please see these instructions for more information.

On Linux 

Linux supports system limits, per-process soft limits and per-process hard-limits on the number of open file handles.

The default limits are set to a quite high value (1048576 on Debian 10 "Buster"). However, the limits may have been lowered by the system administrator, especially if the Linux PC is a multi-tenant server, web server, container server or OpenVZ server, in order to provide a limited but consistent experience to the system tenants. Any installed Linux Security Module (LSM) such as SELinux or AppArmor may also impact the value.

Any new child process will inherit the parent process's limit values.

Per-process limits 

You can check the current limits for new processes spawned by the current user account, by running: ulimit -n

You can check the available hard and soft limits for any process by running (e.g.):

  • Find the PID of each Cloud Backup process: pidof backup-tool
  • Check the soft/hard limits for the PID: grep files "/proc/PID/limits"

You can update the current limit for new child processes, by running: ulimit -n 10485760 (for a 10x raise from the default) or ulimit -n unlimited.

  • Only the root user has permission to raise their own ulimit.
  • This will only affect newly spawned child processes; existing processes will retain the previous limit. You should restart Cloud Backup for the changes to take effect, or restart the whole PC for the changes to affect all running processes. Note that changes made with ulimit may not survive a reboot.

You can set the per-process file handle limit for processes spawned by systemd, by adding a LimitNOFILE=... stanza to the systemd unit file. The infinity value is supported.
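
For example, a systemd drop-in file could look like the following (a sketch; the unit name backup.service is hypothetical - substitute the actual name of the Cloud Backup service unit on your system):

# /etc/systemd/system/backup.service.d/limits.conf
# ("backup.service" is a placeholder unit name)
[Service]
LimitNOFILE=1048576

After creating the file, run systemctl daemon-reload and restart the service for the new limit to take effect.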

System-wide limits 

You can view the current system-wide limit on the number of open file handles by running: cat /proc/sys/fs/file-max.

You can view the current system-wide number of open file handles by running: cat /proc/sys/fs/file-nr, or, by running lsof | wc -l. The values may differ slightly.

The default system-wide limit might be currently set

  • in the /etc/security/limits.conf file, or
  • by any file in the /etc/security/limits.d/ subdirectory, or
  • in the /etc/sysctl.conf file.

You can update the system-wide limit by running:

echo "fs.file-max = 10485760" >> /etc/sysctl.conf
/sbin/sysctl -p 

This raises the system file handle limit to 10x the default, and then reloads the sysctl variables. On Debian, the sysctl program is in the procps package, which also provides the pidof and ps programs.

On Windows 

Each process has a limit on the total number of open handle objects. The maximum number of open handles is 16711680 (16 million) on x86_64 versions of Windows. However, it may be lower on other CPU architectures, or it might be lowered by a system administrator via Group Policy.

There is also a per-session limit on the number of opened files over the network. The default value is 16384; you can see this by running: net config server.

You can see the current per-process handle count from Task Manager, on the Details tab by enabling the optional column "Handles".

Error "Couldn't decrypt Vault contents" message in job reports 

The Cloud Backup client tried to access the Storage Vault, but it contained data using an unknown encryption key. This Storage Vault is probably using the same data location as another Storage Vault (from the same or a different user account).

Each Storage Vault in a user's profile is automatically encrypted on first use, with a randomly generated key. If you reuse the data storage location that was already used by another user's Storage Vault, the Cloud Backup client would not know the encryption key for the Storage Vault, and would be unable to access it.

If you intended to share the same Storage Vault between multiple users, you should log their devices into the same account. Otherwise, you should use a different physical location for each Storage Vault.

Error "permission denied" when restoring from a Local Path Storage Vault on macOS 

In Cloud Backup for macOS since 18.6.x, when you create a local path Storage Vault, the files are created by a background service account, using its own file permissions.

However, in current (18.8.x) versions of the Cloud Backup client, restores are performed as the normal user account. Your normal user account may not have the necessary permissions to access the local path folder if it was created by the background service account.

You may see error messages of the form:

  • Couldn't retrieve a list of snapshots from this Storage Vault
  • Couldn't connect to Storage Vault: Can't access Storage Vault: Open: open *YOUR-STORAGE-VAULT-DIRECTORY* : permission denied

To fix this, use "Get Info" on the Storage Vault's folder to change permissions to allow for read/write, and use the cog menu to choose 'Apply to enclosed items...'.

A future version of the Cloud Backup client may solve this problem by automatically setting file permissions.

Error "A specified logon session does not exist. It may already have been terminated" when accessing a Windows network share (SMB) 

Some SMB servers do not accept multiple SMB login sessions that use the same SMB credentials from the same host but from different Windows user accounts.

On Windows, each user session has its own set of network login sessions. The Cloud Backup client performs the backup using a service user account. Therefore, this issue can occur with an affected SMB server if the interactive Windows user is logged in to the network share at the same time.

Known affected SMB servers 

If you are experiencing this issue, please contact support so that we can document any affected OSes and versions.

This issue is known to affect some Synology NAS devices.

  • Workaround: Enable SMB3 in Synology DSM web interface (requires Windows 8 or later client OS)

Verifying the issue 

The "Run as Administrator" session also has its own separate network login sessions.

  • Are you able to browse the network share from both an elevated and unelevated application? (e.g. "File > Open" from notepad and from a notepad launched with "Run as Administrator")

The interactive user account may have an open network session to the affected device.

  • From a command prompt, can you run net use to see if the interactive user is logged in to the network share?

  • From a command prompt, if you run net use \\server_name\share_name /DELETE to log the interactive user out of the network share, does this allow the backup to proceed?

    • Note that this may cause the same error to affect the interactive user. You may have to run this same command as an After command in the backup job, to log the background service account out of the network share, so that the interactive user can log back in.

Workarounds 

If you are backing up from the SMB server, you could work around this issue by

  • installing Cloud Backup directly on the network device (e.g. for Synology, enabling SSH access and installing the command-line Linux version), to back up the files directly. This may have significantly better performance, as less data needs to travel across the LAN and the disk access latency is significantly improved.

Alternatively, by

  • logging the interactive user account out of Windows entirely before the job starts. This will close their network sessions. Cloud Backup will successfully log in to the network share; you should add net use /DELETE as an After command in the Cloud Backup client to ensure the service user logs out again after the backup

Alternatively, by

  • logging the interactive user account out of the network share before the job starts, and logging them back in afterward. You may be able to use Windows Task Scheduler for this with the "Run only when user is logged on" option to ensure that the commands run inside the correct logon session:
    • net use /DELETE in Task Scheduler before the backup window, to log the interactive user out
    • Enter network credentials in the Cloud Backup client, so the service user logs in
    • net use /DELETE as an After command in the Cloud Backup client, so the service user logs out
    • net use in Task Scheduler after the backup window, to log the interactive user back in again

If you are backing up to the SMB server, you could work around this issue by

  • enabling the SFTP system, or the Minio S3-compatible app, and configuring your Storage Vault to use that instead. These protocols support explicitly entering credentials, which should avoid this issue.

Error "0xc0000142" in KERNELBASE.dll starting services on Windows Server 2012 R2 

Windows was unable to launch the Cloud Backup client's process because of an internal error.

In our experience, this issue can be resolved by running Windows Update. Please ensure Windows Update is fully up-to-date on this PC.

If this does not resolve the issue, please contact support for further assistance.

Multiple devices are detected as being the same device 

The Cloud Backup client tells machines apart by their "device ID". This is automatically determined from a mix of hardware and software identifiers.

One possible cause of this issue is if the two VMs were originally clones of each other. If you have cloned a VM in the past, it might have the same hardware and software identifiers, and so appear to the Cloud Backup client as the same device.

If multiple devices appear to the Cloud Backup client as the same device, they will share the same Protected Items and job scheduling. This causes follow-on issues for logging and reporting.

You can resolve this issue by changing the hardware or software ID for the affected VM. This will influence the Cloud Backup client's device ID to force the devices to be detected as different devices.

On Linux, the SSH host keys are one signal that influences the generated device IDs. Installing SSH, or regenerating the SSH host keys, will cause the device ID to change.

On Windows (with version 18.9.9 or later), you can add extra data to influence the generated device ID by creating a registry key.

  1. Open Registry Editor (regedit.exe)
  2. Browse to the HKEY_LOCAL_MACHINE\Software\cometbackup folder key, creating it if it does not already exist
    • The HKEY_LOCAL_MACHINE\Software\backup-tool folder key is also recognized (with version 18.12.2 or later)
  3. Create a "String Value" with name DeviceIdentificationEntropy
  4. Set any random text as the Data value. This value will influence the generated device ID.
  5. Restart all Cloud Backup services (e.g. backup.delegate and backup.elevator)
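
As an alternative to using Registry Editor, the same value can be created from an elevated command prompt (a sketch; the value data is just an example - any random text will do):

rem The value data below is an example only
reg add "HKLM\Software\cometbackup" /v DeviceIdentificationEntropy /t REG_SZ /d "any-random-text-here"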

Can't use %USERPROFILE% in selected backup paths on Windows 

The Cloud Backup client does not expand Windows environment variables in the path selection.

The Cloud Backup app runs backup jobs as a dedicated service account. If the %userprofile% environment variable was expanded, it would refer to the "wrong" user account.

You can back up the %USERPROFILE%\Documents directory for all users, by

  • including the entire C:\Users directory, and
  • excluding other parts of it (e.g. pattern C:\Users\**\Downloads or regex ^C:\\Users\\[^\\]+\\Downloads).

Error "ciphertext verification failed" when using a Storage Vault 

This error message can occur either immediately, when running any backup or restore operation; or, it can occur part-way through a job.

Error occurs immediately 

If this error message occurs immediately, it means the Cloud Backup client was unable to connect to the Storage Vault at all, because the encryption key in the user's Storage Vault settings does not match the files in the /keys/ subdirectory in the data storage location.

When a Storage Vault is used for the first time, the Cloud Backup client generates a random encryption key, and stores it in an encrypted form in both (A) the user's profile, and (B) in the /keys/ subdirectory in the data storage location. It's important that these match at all times. If they do not match, Cloud Backup will be unable to use the data inside the Storage Vault.

You may potentially encounter this issue in the following situations:

  • If the first time this Storage Vault was used, multiple backup jobs ran simultaneously

    • The first-time initialization may take a few seconds. If multiple initialization jobs were running simultaneously, this may have caused a conflict when saving the encryption key into the user profile
  • If you were performing a Seed Load, but...

    • created a new Storage Vault in the client instead of reusing the existing one; or
    • did not copy the /keys/ subdirectory or the top-level config file; or
    • misconfigured the "subdirectory" or "path" option in the Storage Vault settings
  • If you change the Storage Vault location to point to another user account's Storage Vault

    • Data locations cannot be simply reused by multiple user accounts - the other user account would have a different encryption key. If you want to share a single data storage location between multiple customers, you should have both customers log in to the same account as devices, so that they can share the Storage Vault settings including the encryption material.

Error occurs part-way through a running job 

This indicates that a file inside the Storage Vault is corrupted. Please run a "Deep Verify" action on the Storage Vault, and see the "Data validation" section in the Appendix for more information.

Error "not a supported backup storage location" backing up System State to a USB flash drive 

The "Windows Server System State" and "Windows System Backup" Protected Item types in the Cloud Backup client inherit some restrictions from the underlying technology (wbadmin).

It's not officially possible to spool the backup job to a USB flash drive:

"You cannot store backups on USB flash drives or pen drives." - https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753528(v=ws.11)

This is a preventative measure in case the drive is removed mid-backup.

The following workarounds are available:

  1. Modify the flash drive to appear as a non-removable disk

For more information, please see https://social.technet.microsoft.com/Forums/windowsserver/en-US/c39c050f-d579-4222-8ad1-44d2ff53882b/windows-backup-cannot-see-usb-flash-drive?forum=windowsbackup#03e336a4-bc68-46f6-9a4c-be6907903da6

  2. Create a shared network directory on the flash drive, and tell the Cloud Backup client to use the UNC network path as the spool directory instead (see the example below)

For more information, please see https://social.technet.microsoft.com/Forums/lync/en-US/e1f0fa4e-5fb0-4749-82d6-16b1bd427495/when-will-windows-server-backup-allow-usb-flash-drives-as-a-target?forum=windowsbackup#fc8affd9-378a-447e-a36e-c5af0bb1e40b
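
For example, a share could be created on the flash drive from an elevated command prompt (a sketch; the share name BackupSpool and the F: drive letter are hypothetical - substitute your own values, and consider granting tighter permissions than in this example):

rem "BackupSpool" and F:\Spool are placeholder values
net share BackupSpool=F:\Spool /GRANT:Everyone,FULL

The UNC path to the share (e.g. \\<computer-name>\BackupSpool) can then be entered as the spool directory.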

Error "The service did not start due to a logon failure" 

This error may affect Cloud Backup or Cloud Backup Server. Both products make use of a background Windows service.

Please check in Windows Event Viewer for more detail about the error message. There may be an error report in the "System" log category, from the "Service Control Manager" source.

One possible reason is that an account password was specified incorrectly.

Another possible reason for this issue is if the right to log on as a service is denied for this user account. Normally the installer asserts this policy during installation; however, on a domain-joined machine, it might have been overwritten by Active Directory policy.

  • If the machine is domain-joined, please check the Active Directory policy.
  • If the machine is not domain-joined, you can check this policy inside gpedit.msc; on the left-hand tree, expand "Computer Configuration" > "Windows Settings" > "Security Settings" > "Local Policies" > "User Rights Assignment"; then open the "Log on as a service" item. The target user should appear in this list, or, the target user should be a member of a Windows group that appears in this list.

Slow backup jobs 

There are many possible reasons why a backup job might be slow.

Recent changes 

Did the issue suddenly start happening at a certain time?

  • New software
    • Any recently-installed software might change the performance profile of the customer's PC.
    • On Windows, check in "Programs And Features" and sort by Date to see any recently-installed software
    • Does the issue coincide with a Cloud Backup client software update?

Customer PC performance 

Are multiple customers experiencing the issue, or just a single customer? This helps determine whether the issue is related to your general/server-side infrastructure or whether the issue is related to the customer's environment.

  • Antivirus

    • Many antivirus programs will scan each file as the Cloud Backup client reads them, including but not limited to ESET NOD32 and Windows Defender.
    • Does it help to exclude the Cloud Backup client's backup-tool.exe program in the antivirus software?
      • the Cloud Backup client 19.3.13 and later automatically does this for Windows Defender.
    • Does the antivirus process show as having high usage in Task Manager when the backup is running?
  • Use of slow settings

    • Ensure the "Limit backup to use only 1 disk thread" option is not enabled
    • Ensure the "speed limit" option is not enabled
    • Ensure the "Prefer temporary files instead of RAM (slower)" option is not enabled
    • Toggle the "Rescan unchanged files" option, to see if it increases- or decreases- performance
  • RAM usage

    • With large (multi-TB) Storage Vaults, there are many different data chunks that could be deduplicated against. Cloud Backup will start to use a few GB of RAM to hold all the indexes for deduplication. If the local PC is low on RAM, it may use the swapfile / pagefile, which can significantly reduce performance.
  • CPU usage

    • The Cloud Backup client compresses and encrypts all data before upload. On weak CPUs this may cause high CPU usage, and the CPU may become a bottleneck.

Storage performance 

Check what kind of disks the customer is backing up.

Check what kind of storage the customer is using.

Check where the temporary directory is for the backup service user account.

  • Avoid backing up files from a network share

    • If you are backing up files from a network location, the Cloud Backup client must make many network roundtrips to access the data. It may be substantially faster to install the Cloud Backup client on the network device instead.
  • Backup storage on the same volume as the backup source

    • Using a mechanical hard drive for multiple tasks simultaneously may reduce its performance from sequential-access speeds down to random-access speeds, even for sequential tasks.
  • Backup source is a single-queue block device

    • The Cloud Backup client issues many requests to the source disk in parallel. To avoid negatively affecting other programs on the PC, the Cloud Backup client tries to access the source disk at a low OS priority, but this may be ineffectual if your disk only supports a single queue. You can toggle the "Limit backup to use only 1 disk thread" option to force the Cloud Backup client to make only one request to the source disk at a time. This may have a positive effect on other programs on the PC, at the expense of backup job performance.
  • Use of external harddrives

    • Is it USB 2 or USB 3?
    • Some disk drives may experience slow performance. You can use a benchmarking tool to determine the expected performance of the USB drive (both in sequential reads and in small random reads) independently of the Cloud Backup client, as a baseline to compare against the Cloud Backup client's performance.
      • At the time of writing, CrystalDiskMark is a popular freeware software for measuring disk performance on Windows.
    • Performance Mode
      • There is an option in Windows to control whether USB drives are configured for "Quick removal" (default) or "Better performance". Switching to the latter mode can significantly improve performance, but requires you to safely eject the drive. To change this setting:
        1. Open Device Manager > Disk drives > Properties > Policies tab
        2. If the "Quick removal" / "Better performance" radio option is available, ensure it is set to "Better performance"
        3. If the "Enable write caching" checkbox option is available, ensure that it is enabled
  • Backing up direct to cloud storage

    • Check the customer's internet connection
    • Check the service provider's status page, to ensure they are not currently experiencing any errors
  • Backing up to Server Storage Role bucket

    • Check the customer's internet connection
    • Check the end-to-end latency of the storage, from the customer's PC through to the final storage location. High latency can reduce backup performance
    • Ensure the Cloud Backup Server is not experiencing high CPU or RAM usage.

Error "ERR_SSL_VERSION_INTERFERENCE" connecting to Cloud Backup Server 

Cloud Backup Server 19.3.9 added support for TLS 1.3.

TLS 1.3 reduces connection latency and improves connection security. Cloud Backup continues to support genuine TLS 1.2 connections.

If you see the ERR_SSL_VERSION_INTERFERENCE error, this message means that your web browser and the web server tried to use the latest TLS 1.3 standard, but, something inbetween them does not support TLS 1.3. Your web browser chose to abandon the connection rather than downgrading the connection security to TLS 1.2.

If there are middleboxes or software on the network path that expect to be able to intercept SSL traffic but do not support TLS 1.3, then all TLS 1.3 connections will fail. These middleboxes or software may need a software update to support TLS 1.3.

  • Are you using any "web security" software that intercepts SSL certificates? (e.g. ESET Internet Security, Symantec Web Security Service)

  • Are you behind a corporate or education network proxy that intercepts SSL certificates? (e.g. BlueCoat / Microsoft Forefront TMG)

  • Is the web browser up-to-date? Some web browsers use an early draft of the TLS 1.3 standard that might be incompatible with the final TLS 1.3 used by Cloud Backup Server

  • As a last resort, you can disable TLS 1.3 in your web browser, so that the web browser connects with TLS 1.2.

Error "mysqldump: Couldn't execute 'SHOW PACKAGE STATUS WHERE Db = '[...]'': You have an error in your SQL syntax [...] (1064)" when backing up MySQL 

This error can occur if you are using a version of mysqldump from MariaDB 10.3 prior to July 2019, connecting to an older MySQL database.

This version of mysqldump does not correctly limit itself to the remote server's capabilities.

For instance, this issue can occur with the default mysqldump in Debian 10 "Buster".

You can read more about this issue on the MariaDB bug tracker (issue MDEV-17429).

Workaround 

You can work around this issue by disabling backup of stored procedures.

If this is acceptable, you can perform this workaround by stripping the --routines parameter that the Cloud Backup client passes to mysqldump. To do so on Linux,

  1. Create a file with the following content:
    #!/bin/bash
    # This program is a wrapper for mysqldump that removes the --routines argument,
    # to work around issue MDEV-17429 with older MySQL servers
    args=("$@")
    for ((i=0; i<"${#args[@]}"; ++i)); do
        case ${args[i]} in
            --routines)
                unset args[i]
                break;;
        esac
    done
    /usr/bin/mysqldump "${args[@]}"
  2. Save this file as /opt/CometBackup/mysqldump-no-routines
  3. Mark the file as executable: chmod +x /opt/CometBackup/mysqldump-no-routines
  4. In the Protected Item settings, set "custom mysqldump path" to this file

Error "Incorrect function" 

This error message indicates that the application tried to do something not supported by the disk. However, all the Cloud Backup client is doing is reading files and directories; if this error happens on a normal local disk, that is certainly supported functionality.

One possibility is that the disk driver is reporting this error message as a symptom of disk corruption when it fails to read sectors for those files.

  • Check if it is possible to open the affected files in any normal app.
  • Use the disk health tools on the device. In Windows Explorer > "This PC" > right-click the drive affected (shown in the error) > "Tools" tab > "Check for errors".

Error "operation not permitted" macOS 

In macOS 10.14, Apple introduced a new privacy flow: the user is now asked for permission when an app requires access to certain features or functions. The user will need to explicitly grant "Full Disk Access" to the Cloud Backup application.

  1. Open the System Preferences (Apple menu)
  2. Select "Security and Privacy" > "Privacy" tab
  3. Select "Full Disk Access"
  4. Add Cloud Backup

Error "The specified backup storage location has the shadow copy storage on another volume" using Windows System Backup 

This error is not specific to backups. You may find more information online.

There is a problem with using the selected spool directory. The spool drive has its shadow storage configured in an unusual way that is incompatible with the wbadmin tool.

Troubleshooting 

  • What kind of drive is the spool target? Is it a network share, or an external harddrive, or a SAN? If it is a SAN, is it a managed appliance?

  • What is the output of running this command as Administrator: vssadmin list shadowstorage

Workaround 

You may be able to work around this issue by creating a network share on the same drive, and entering the UNC path to the share instead of the actual local drive letter.

Error "Dirty Shutdown" when restoring Exchange EDB content 

Depending on the state of the last Exchange Server backup job, you may need to merge log files into the EDB file before it can be accessed. You can do this with the eseutil program included in Exchange Server.

For example, if the database was restored to D:\restore-edb:

  • Check EDB file state: eseutil /MH "D:\restore-edb\File\Mailbox.edb"
  • Apply log files: eseutil /R E00 /D "D:\restore-edb\File" /L "D:\restore-edb\Logs" /S "D:\restore-edb\Logs"

For more information, see this Microsoft article.

Error "Data error (cyclic redundancy check)" while backing up data 

Cloud Backup tried to read a file from the disk for backup, but Windows was unable to provide the file content.

This specific error message comes from the disk driver. If the local disk is a HDD or SSD, the most common cause is a bad sector in the physical hardware: please use the chkdsk tool to schedule a boot-time sector check.
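
For example, from an elevated command prompt (substitute the drive letter shown in the error message):

rem C: is a placeholder - use the affected drive letter
chkdsk C: /R

The /R switch locates bad sectors and attempts to recover readable information. If the volume is in use (such as the system drive), chkdsk will offer to schedule the check for the next restart.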