Don't read full image media into RAM on copying (#419)

We copy uploaded media from the queue store to the local workbench
and then to its final destination. The latter was done by simply:
dst.write(src.read()), which is of course evil as it reads the whole
file content into RAM. That *might* arguably still be OK for
images, but you never know.

Make use of the provided storage() methods that offer chunked copying
rather than opening and fudging with files ourselves.

Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
This commit is contained in:
Sebastian Spaeth 2012-12-19 14:18:03 +01:00
parent 7f4e42b0b1
commit 5018a3557c

@@ -120,17 +120,10 @@ def process_image(entry):
     else:
         medium_filepath = None
 
-    # we have to re-read because unlike PIL, not everything reads
-    # things in string representation :)
-    queued_file = file(queued_filename, 'rb')
-    with queued_file:
-        original_filepath = create_pub_filepath(
-            entry, name_builder.fill('{basename}{ext}'))
-
-        with mgg.public_store.get_file(original_filepath, 'wb') \
-                as original_file:
-            original_file.write(queued_file.read())
+    # Copy our queued local workbench to its final destination
+    original_filepath = create_pub_filepath(
+        entry, name_builder.fill('{basename}{ext}'))
+    mgg.public_store.copy_local_to_storage(queued_filename, original_filepath)
 
     # Remove queued media file from storage and database
     mgg.queue_store.delete_file(queued_filepath)