How is this even possible?
This must be a glitch in the Matrix. Right? Let me explain.
I have some automation that converts a qcow2 image into a raw image before uploading the resulting raw image to an S3 bucket (in AWS). Prior to Friday, this automation was specifically written for RHEL qcow2 images, but due to new requirements, the automation has been adapted to handle BIGIP qcow2 images.
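For context, the conversion step is (presumably; my exact automation isn't shown here, and the qcow2 filename below is a guess) the standard qemu-img invocation, whose raw output is written sparsely by default:

```shell
# Hypothetical reconstruction of the conversion step; the source
# filename is assumed. qemu-img convert skips writing zero blocks,
# so the raw output file is sparse by default.
qemu-img convert -f qcow2 -O raw \
    f5-bigip-17.0u4.x86_64.qcow2 \
    f5-bigip-17.0u4.x86_64.raw
```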
-rw-r----- 1 d staff 81G Aug 7 01:52 f5-bigip-17.0u4.x86_64.raw
Now, had this been written to a filesystem with adequate space to accommodate a file of this size, I wouldn't have thought anything of it. However, before this file was created, there was only 25GB of space available on the filesystem:
What does line up is that after the raw image was created, and despite the size ls was showing for it, df reported the filesystem's usage as follows, which consequently matches the disk size of the source qcow2 image:
Up to this point, I could reasonably convince myself of how such a discrepancy might exist. The only problem is that since the automation started uploading the raw image to the S3 bucket, it has (at the time of this posting) uploaded more than 55GB of the image:
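For comparison, here is a minimal sparse-file sketch (filename made up) showing how ls and du/df can disagree in exactly this way, assuming the raw image was written sparsely:

```shell
# Create a 10GiB sparse file: the apparent size is 10GiB,
# but no data blocks are allocated for the hole.
truncate -s 10G sparse-demo.raw

# ls reports the apparent size (10G)...
ls -lh sparse-demo.raw

# ...while du reports the blocks actually allocated (near zero),
# which is all that df ever accounts for.
du -h sparse-demo.raw

rm sparse-demo.raw
```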
Some technical specs about the systems on which this is executing:
Host System: Macbook Pro ( / is on an encrypted APFS volume)
Guest System: RHEL 7.4 (via VMware Fusion v10)
The qcow2 and raw images exist on the host system in the Downloads directory, which is shared with/accessible from the guest system.
Unless there's some kind of compression and/or deduplication happening on the host system's filesystem where these files live, how is it possible that an 81GB file was written to a filesystem that only had 25GB available, and from which more than 55GB has already been copied up to an S3 bucket?
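My current hunch, for anyone who can confirm it: if the raw image is sparse, then reads of the unallocated holes simply return zero bytes, so an uploader can stream far more data than the filesystem actually stores. A quick sketch of that behavior (filename made up):

```shell
# Create a file that is one 1GiB hole followed by 4 bytes of data.
dd if=/dev/zero of=hole-demo.raw bs=1 count=0 seek=1G 2>/dev/null
printf 'data' >> hole-demo.raw

# Reading the file back streams the full 1GiB+4 bytes (holes read
# as zeros), even though almost nothing is allocated on disk.
wc -c < hole-demo.raw
du -k hole-demo.raw

rm hole-demo.raw
```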
Thanks to all for your help answering this question.