The copy_locally and copy_local_to_storage methods (very inconsistent
naming, by the way) were simply slurping everything into RAM and
writing it out in one go.
(copy_locally was actually memory efficient when the remote system
happened to be local.)
Use shutil.copyfileobj, which does chunked reads/writes on file objects.
Its default buffer size is 16 KB, and since each chunk means a separate
HTTP request for e.g. cloudfiles, we use a chunk size of 4 MB here
(picked arbitrarily by me, without benchmarks).
This should help with the large-file upload failures in issue #419.
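
A minimal sketch of the chunked copy (the storage.get_file() call and
the copy_local_to_storage-style signature are assumptions standing in
for the real storage interface):

    import shutil

    # 4 MB per chunk: each chunk becomes one HTTP request against e.g.
    # cloudfiles, so we go well beyond copyfileobj's 16 KB default.
    CHUNK_SIZE = 4 * 1024 * 1024


    def copy_local_to_storage(storage, local_filename, filepath):
        # Stream the local file into the storage backend without ever
        # holding the whole thing in RAM.
        source = open(local_filename, 'rb')
        dest = storage.get_file(filepath, 'wb')  # assumed interface
        try:
            shutil.copyfileobj(source, dest, CHUNK_SIZE)
        finally:
            source.close()
            dest.close()
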
It seems that (our implementation of) cloudfiles.write() takes
all existing data and appends write(data) to it, sending the
full monty over the wire every time. This would of course
absolutely kill chunked writes with quadratic (O(n^2)) time and
bandwidth usage. So, override this method and use the
cloudfiles "send" interface instead.
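
Roughly, the override could look like this (a sketch only;
CloudFilesStorageObjectWrapper is a stand-in name, and the wrapped
object is assumed to be a python-cloudfiles storage object exposing
read() and send()):

    class CloudFilesStorageObjectWrapper(object):
        """Thin wrapper around a python-cloudfiles storage object."""

        def __init__(self, storage_object):
            self.storage_object = storage_object

        def read(self, *args, **kwargs):
            return self.storage_object.read(*args, **kwargs)

        def write(self, data, *args, **kwargs):
            # cloudfiles' own write() re-sends everything written so
            # far on every call; send() pushes only this chunk over
            # the wire.
            self.storage_object.send(data)
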
Also make the Cloudfile file wrapper an iterator, so we can simply do
"for data in cloudfile:" and have the data streamed in a
memory-efficient way.
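
Continuing the wrapper sketch above, iteration could be added like this
(the read(size, offset) call mirrors python-cloudfiles' Object.read(),
but treat the exact signature as an assumption):

        CHUNK_SIZE = 4 * 1024 * 1024

        def __iter__(self):
            # Yield the object in fixed-size chunks so that
            # "for data in cloudfile:" never loads the whole file
            # into RAM.
            offset = 0
            while True:
                data = self.storage_object.read(self.CHUNK_SIZE, offset)
                if not data:
                    break
                yield data
                offset += len(data)
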
DO NOTE THAT THIS PATCH IS COMPLETELY UNTESTED DUE TO LACK OF SETUP
PLEASE REVIEW AND VERIFY.
* Removed trailing whitespace
* Line length < 80 where possible
* Honor conventions on number of blank lines
* Honor conventions about spaces around :, =
* DONE Initial testing with arista
** DONE Video display templates
*** TODO Multi-browser support
** TODO Video thumbnails
** TODO Link to original video
** TODO Video cropping
Also contains a lot of "debug" print statements
* Removed storage.py
* Created submodules for filestorage, cloudfiles, mountstorage
* Changed test_storage to reflect the changes made in the storage
module structure
* Added mediagoblin.storage.filestorage.BasicFileStorage as the
  default `storage_class` for both publicstore and queuestore
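
For illustration, the relocated class can now be imported from its new
submodule; the constructor arguments below are assumptions, shown only
to indicate usage:

    from mediagoblin.storage.filestorage import BasicFileStorage

    # Hypothetical paths/arguments; the real values come from the
    # publicstore / queuestore configuration.
    public_store = BasicFileStorage(
        base_dir='/srv/mediagoblin/media/public',
        base_url='/mgoblin_media/')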