There were some "serializing to json strings" issues. They should be
fixed now: the video "tags" metadata coming out of gstreamer is now
whitelisted and cleaned much more carefully.
This commit sponsored by Aimee Sullivan. Thanks!
The problem is:
Collection.query.filter_by(id=X, ...)
1. X = form.collection.data
   This works nicely for the completely empty form (X = None).
   It does not work for a selected collection, because X
   will be the collection, not its id.
2. X = request.form.get('collection') (old code).
   This one works mostly, except for the completely empty
   form on postgres, because in this case X = u"__None" and
   postgres does not like comparing an integer column with
   a string.
Fix:
    collection = form.collection.data
    if collection and collection.creator != request.user.id:
        collection = None
"vp8 video" is what vp8 is marked as in gstreamer's metadata.
However, the browser expects it just as the name "vp8". So fixing
that.
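Roughly, the mapping looks like this (dict and helper names are
illustrative):

    # translate gstreamer's codec tag into the short name browsers
    # expect in the <source> element's codec information
    CODEC_NAME_MAP = {
        'vp8 video': 'vp8',
    }

    def browser_codec_name(gst_codec_name):
        return CODEC_NAME_MAP.get(gst_codec_name.lower(), gst_codec_name)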
This commit sponsored by Tyng-Ruey Chuang. Thank you!
That's true; I'm not sure what it's fixing, but he thinks it's fixing
something. Anyway, it's correct :)
This commit sponsored by Philippe Gauthier. Thanks!
- Don't let people who aren't the author of a collection add things
  to it (handled by forcing the user check in the query)
- fixed the request url used when an invalid collection is selected
- collection_item.author doesn't yet exist; removing the selection
(we might want multiple people to be able to edit a collection in
the future but that future does not yet exist; as Elrond said,
remove this "false hope")
Thanks to Elrond for pointing out these issues.
And thanks to David Kindler for sponsoring this commit!
I think this is legacy code from get_display_media being a utility, or
something. Removed! (Thanks for pointing this out, Elrond!)
This commit sponsored by Tristan Chambers. Thank you!
The copy_locally and copy_local_to_storage (very inconsistent terms, BTW)
were simply slurping everything into RAM and writing it out at once.
(copy_locally was actually memory efficient if the remote system was local.)
Use shutil.copyfileobj which does chunked reads/writes on file objects.
The default buffer size is 16kb, and as each chunk means a separate HTTP
request for e.g. cloudfiles, we use a chunksize of 4MB here (which has
just been arbitrarily set by me without tests).
This should help with the failure to upload large files (issue #419).
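A minimal sketch of the chunked copy (the file objects here are plain
local files just for illustration; 4MB is the chunksize mentioned above):

    import shutil

    CHUNK_SIZE = 4 * 1024 * 1024  # 4MB instead of copyfileobj's small default

    with open('/tmp/source.webm', 'rb') as src, \
         open('/tmp/dest.webm', 'wb') as dst:
        # copyfileobj reads and writes CHUNK_SIZE bytes at a time, so the
        # whole file never needs to sit in RAM at once
        shutil.copyfileobj(src, dst, CHUNK_SIZE)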
The reason for this is to avoid defining this twice as we were
previously (once in the template, once in video/models.py).
This commit sponsored by Roland McIntosh. Thank you!
It's kind of awkward because it relies on there being an entry.media_data,
but that's not guaranteed... (see http://issues.mediagoblin.org/ticket/650)
so we use a dopey fallback in the template in that case (kind of
annoying info duplication).
This commit sponsored by Piotr Wieczorek. Thank you!
Of course, the version that appears here is not really dangerous
because it's for the "call the file individually" form of debugging,
but it isn't allowed anyway.
This commit sponsored by Michael Faryniarz. Thanks!
This comes in several parts:
- Store the metadata from gstreamer during processing
- Add a new JSONEncoded field to the VideoData table (sketched below)
- And, of course, add a migration for that field!
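Roughly what the new field looks like, as a self-contained SQLAlchemy
sketch (type and table details simplified; the migration itself is not
shown):

    import json
    from sqlalchemy import Column, Integer
    from sqlalchemy.types import TypeDecorator, VARCHAR
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class JSONEncoded(TypeDecorator):
        """Store a Python dict as a JSON string in a text column."""
        impl = VARCHAR

        def process_bind_param(self, value, dialect):
            return json.dumps(value) if value is not None else None

        def process_result_value(self, value, dialect):
            return json.loads(value) if value is not None else None

    class VideoData(Base):
        __tablename__ = 'video__mediadata'
        id = Column(Integer, primary_key=True)
        # raw metadata pulled out of gstreamer during processing
        orig_metadata = Column(JSONEncoded)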
This commit sponsored by Julius Tuomisto. Thank you, Julius!
- Update get_display_media in several ways (see the sketch after this list):
  - now uses the media type's own declared fetch order
  - returns both the media_size and the media_path, as per the docstring
  - implicitly uses self.media_files as opposed to forcing you to pass it in
- update videos to use get_display_media
- update images to declare media_fetch_order in the media manager (videos also)
- update stl to use media.media_files['original'] instead of weird
use of get_display_media
- update sidebar to only conditionally show webm_640
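A rough sketch of the updated lookup (simplified; the media_manager
access is illustrative):

    def get_display_media(self):
        """Return (media_size, media_path) for the best file available,
        walking the media type's declared media_fetch_order."""
        fetch_order = self.media_manager.get('media_fetch_order', [])
        for media_size in fetch_order:
            if media_size in self.media_files:
                return media_size, self.media_files[media_size]
        return None, None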
TODO still: identify video type information *during* processing, show
that in the <video><source /></video> element.
This commit sponsored by Nathan Yergler. Thanks, nyergler!
Check for CELERY_CONFIG_MODULE before we import raven.contrib.celery. It
seems that the import otherwise sets up the celery client before we get
to pass it our mediagoblin-specific settings.
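The ordering it enforces looks roughly like this (illustrative, not the
exact code; register_signal is just one example of what the module
provides):

    import os

    # only pull in raven's celery integration once CELERY_CONFIG_MODULE
    # is set, so the import can't configure the celery client before our
    # mediagoblin-specific settings have been passed in
    if os.environ.get('CELERY_CONFIG_MODULE'):
        from raven.contrib.celery import register_signal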
It seems that (our implementation of) cloudfiles.write() takes
all existing data and appends write(data) to it, sending the
full monty over the wire every time. This would of course
absolutely kill chunked writes with roughly O(n^2) performance
and bandwidth usage. So, override this method and use the
Cloudfile's "send" interface instead.
Also make the Cloudfile file wrapper an iterator that allows us to
simply do "for data in cloudfile:" which will stream the data in a
memory-efficient way.
DO NOTE THAT THIS PATCH IS COMPLETELY UNTESTED DUE TO LACK OF SETUP.
PLEASE REVIEW AND VERIFY.
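A very rough sketch of both ideas, equally untested (the wrapper name is
made up, and the chunked stream() call is an assumption about the
cloudfiles API; send() is the interface mentioned above):

    class CloudFilesStorageObjectWrapper(object):
        """Wrap a cloudfiles storage object for streaming reads/writes."""

        def __init__(self, storage_object, chunk_size=4096):
            self.storage_object = storage_object
            self.chunk_size = chunk_size

        def write(self, data):
            # send just this chunk over the wire, instead of re-sending
            # everything written so far plus the new chunk
            self.storage_object.send(data)

        def __iter__(self):
            # stream the object back out in fixed-size chunks so callers
            # can do "for data in cloudfile:" without loading it all
            for chunk in self.storage_object.stream(chunksize=self.chunk_size):
                yield chunk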
Removed the translation marking and passed in empty strings to avoid
WTForms automagically creating the labels from the field names (e.g.
client_id => 'Client Id').
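The pattern looks roughly like this (the form and field names are
illustrative):

    import wtforms

    class ClientForm(wtforms.Form):
        # empty label strings stop WTForms from auto-generating
        # "Client Id" / "Next" labels from the field names
        client_id = wtforms.HiddenField(u'')
        next = wtforms.HiddenField(u'')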
Instead of rendering it by hand, use the normal field rendering tools.
Old:
[choose box] License preference
New:
License preference
[choose box]
This will be your default license on upload forms.