I am building a partial backup feature for cross-datasource backups between partners on an engineering project. I have a limited window of time to get the data across daily, so I need to filter out any file larger than 1 GB to prevent a single large file from consuming the whole window. I have a larger window over the weekend to transfer those bigger files. The core of the backup retrieves documents using Get-PWDocumentsBySearchWithReturnColumns with the -FileUpdatedAfter parameter so that only the most recently modified files are returned.
The retrieved object contains a value for FileSize, but I have found it unreliable. By unreliable, I mean that many documents are reported with a FileSize of zero (which is false); others get a value, but when compared against each other the numbers are all over the place: some larger documents are reported as smaller and vice versa.
Here is a short example of three returned values and the corresponding values in the UI:
The result is also missing the FilePath, which pushed me to run the returned documents through Get-PWDocumentsByGUIDs; that call gives me a proper FileSize and a usable FilePath as well.
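In case it helps, here is a minimal sketch of the two-step workaround I ended up with. The cmdlet names come from the PWPS_DAB module; the exact parameter names (-DocumentGUIDs) and the property names I filter on (DocumentGUID, FileSize) are what I see in my environment, so treat them as assumptions rather than a definitive recipe:

```powershell
# Step 1: search for documents updated in the last day.
# The FileSize returned here is the unreliable one described above.
$searchResults = Get-PWDocumentsBySearchWithReturnColumns `
    -FileUpdatedAfter (Get-Date).AddDays(-1)

# Step 2: re-fetch the same documents by GUID to get a trustworthy
# FileSize and a usable FilePath. (DocumentGUID is the property name
# I see on the search results in my environment.)
$documents = Get-PWDocumentsByGUIDs -DocumentGUIDs $searchResults.DocumentGUID

# Split on the 1 GB cutoff: small files go in the daily window,
# the rest wait for the weekend run.
$dailySet   = $documents | Where-Object { $_.FileSize -le 1GB }
$weekendSet = $documents | Where-Object { $_.FileSize -gt 1GB }
```

It works, but it effectively doubles the round trips to the datasource, which is why I am asking whether the search cmdlet could return the correct value directly.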
Could this cmdlet be improved to return the actual FileSize value directly? And perhaps the other attributes as well? It runs quite quickly as it is and would be more usable with more metadata.
Or maybe I'm not using it correctly; I would appreciate any hints you may have.
The file size is in bytes. You can convert the value to display it in whatever units you want: divide by 1KB, 1MB, or 1GB to get the units you are looking for.
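For example, a quick sketch (the document object and its FileSize value here are made up for illustration; PowerShell treats 1KB, 1MB, and 1GB as built-in numeric literals):

```powershell
# A stand-in document object with a FileSize in bytes.
$doc = [pscustomobject]@{ Name = 'bridge_model.dgn'; FileSize = 1572864000 }

# Simple division converts bytes to the unit you want.
$sizeMB = $doc.FileSize / 1MB
$sizeGB = $doc.FileSize / 1GB

'{0} is {1:N0} MB ({2:N2} GB)' -f $doc.Name, $sizeMB, $sizeGB
# → bridge_model.dgn is 1,500 MB (1.46 GB)

# The same literals work for your 1 GB cutoff:
$doc.FileSize -gt 1GB   # → True, since the file exceeds one gigabyte
```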