There were some "serializing to json strings" issues. They should be
fixed now... we now do much more careful whitelisting and cleaning of
the video "tags" metadata coming out of gstreamer.
This commit sponsored by Aimee Sullivan. Thanks!
"vp8 video" is what vp8 is marked as in gstreamer's metadata.
However, the browser expects it just as the name "vp8". So fixing
that.
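As a tiny illustrative sketch (the mapping table and helper name are
hypothetical beyond the vp8 case mentioned above):

    # Translate gstreamer's codec label into the short name browsers expect.
    GST_TO_BROWSER_CODEC = {
        'vp8 video': 'vp8',   # the case this commit fixes
    }

    def browser_codec_name(gst_name):
        return GST_TO_BROWSER_CODEC.get(gst_name, gst_name)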
This commit sponsored by Tyng-Ruey Chuang. Thank you!
The reason for this is to avoid defining this twice as we were
previously (once in the template, once in video/models.py)
This commit sponsored by Roland McIntosh. Thank you!
It's kind of awkward because it relies on there being an entry.media_data,
but that's not guaranteed... (see http://issues.mediagoblin.org/ticket/650)
so we use a dopey fallback in the template in that case (kind of
annoying info duplication).
This commit sponsored by Piotr Wieczorek. Thank you!
Of course, the version that appears here is not really dangerous
because it's for the "call the file individually" form of debugging,
but it isn't allowed anyway.
This commit sponsored by Michael Faryniarz. Thanks!
This comes in several parts:
- Store the metadata from gstreamer during processing
- Add a new JSONEncoded field to the VideoData table
- And, of course, add a migration for that field!
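A rough sketch of what that could look like (the column, table and
migration helper names below are illustrative; the actual field type and
migration machinery in MediaGoblin may differ):

    import json
    from sqlalchemy import Column, MetaData, Table
    from sqlalchemy.types import TypeDecorator, VARCHAR

    class JSONEncoded(TypeDecorator):
        """Store a python dict/list as a JSON string in a VARCHAR column."""
        impl = VARCHAR

        def process_bind_param(self, value, dialect):
            return json.dumps(value) if value is not None else None

        def process_result_value(self, value, dialect):
            return json.loads(value) if value is not None else None

    # On the VideoData model the new field would then simply be:
    #     orig_metadata = Column(JSONEncoded)

    def add_orig_metadata_column(db_conn):
        """sqlalchemy-migrate style migration (assumes migrate's changeset
        extensions are loaded so Column gains a .create() method)."""
        metadata = MetaData(bind=db_conn.bind)
        video_data = Table('video__mediadata', metadata, autoload=True)
        Column('orig_metadata', JSONEncoded).create(video_data)
        db_conn.commit()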
This commit sponsored by Julius Tuomisto. Thank you, Julius!
- Update get_display_media in several ways:
- now uses the media type's own declaration of the order of things
- returns both the media_size and the media_path, as per the docstring
- implicitly uses self.media_files as opposed to forcing you to pass it in
- update videos to use get_display_media
- update images to declare media_fetch_order in the media manager (videos also)
- update stl to use media.media_files['original'] instead of weird
use of get_display_media
- update sidebar to only conditionally show webm_640
TODO still: identify video type information *during* processing, show
that in the <video><source /></video> element.
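Something along these lines (an illustrative sketch, not the literal
patch; how the media manager is looked up is simplified here):

    def get_display_media(self):
        """Return (media_size, media_path) for the best file we can display.

        Walks the media type's own 'media_fetch_order' declaration instead
        of a hard-coded order, and uses self.media_files directly.
        """
        fetch_order = self.media_manager.get('media_fetch_order', [])
        for media_size in fetch_order:
            if media_size in self.media_files:
                return media_size, self.media_files[media_size]
        raise KeyError('No displayable file found in media_files')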
This commit sponsored by Nathan Yergler. Thanks, nyergler!
There's no reason to copy it over to 'webm_640' in such a case,
clearly.
Added logic so we don't do it twice either.
Haven't tested this yet though ;)
This commit sponsored by Algot Runeman. Thank you!
The idea is to have a class that has knowledge of the media
currently being processed and also has tools for working with it.
The long-term idea is to make reprocessing easier by, for example,
hiding the way the original comes into the
processing code.
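Sketched very roughly (the class and method names below are made up,
just to illustrate the direction):

    class MediaProcessor(object):
        """Holds the entry currently being processed plus common tools."""

        def __init__(self, entry, workbench):
            self.entry = entry
            self.workbench = workbench

        def get_original(self):
            """Fetch the original file into the workbench.

            For now this would come from the queue store; for reprocessing
            it could later come from permanent storage instead, without
            the media-type code having to care which.
            """
            raise NotImplementedError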
For all our media_types, let the backref on the media_entry
be a scalar (there is only one media_data per media_entry)
instead of a list with zero or one entry.
The media_data toolchain on MediaEntry currently handles
both transparently.
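In SQLAlchemy terms that is just uselist=False on the backref; roughly
(model, table and attribute names are illustrative):

    from sqlalchemy import Column, Integer, ForeignKey
    from sqlalchemy.orm import relationship, backref
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class VideoData(Base):
        __tablename__ = 'video__mediadata'
        # only the relevant bits shown
        media_entry = Column(Integer, ForeignKey('core__media_entries.id'),
                             primary_key=True)

        get_media_entry = relationship(
            'MediaEntry',
            # uselist=False makes entry.media_data a scalar instead of
            # a zero-or-one-element list
            backref=backref('media_data', uselist=False))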
We were reading the complete "medium", "thumbnail" and "original"
files into RAM via dst.write(src.read()). Just call the appropriate
storage method, copy_local_to_storage, which is responsible for
streaming local files efficiently.
The efficiency of this patch depends on the separate branch that
actually implements chunked copying for Storage().copy_local_to_storage()
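The change on the processing side is roughly this shape (variable names
differ per backend and are assumed here):

    from mediagoblin import mg_globals as mgg

    # old approach -- pulls the whole file into RAM:
    #     with open(wb_local_file, 'rb') as src:
    #         with mgg.public_store.get_file(target_filepath, 'wb') as dst:
    #             dst.write(src.read())

    # new approach -- hand the local workbench file to the storage
    # backend, which can stream it chunk by chunk:
    mgg.public_store.copy_local_to_storage(wb_local_file, target_filepath)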
This makes getting a workbench more convenient by eliminating some
boilerplate, and more robust by cleaning the workbench up even if processing
ends with an Exception.
Finally, this fixes bugs in the ascii and video backends which never called
workbench.destroy, so those workbenches were never cleaned up.
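The cleanup guarantee is basically a try/finally around processing; as a
sketch (the manager and destroy method names are assumptions):

    from functools import wraps
    from mediagoblin import mg_globals as mgg

    def with_workbench(func):
        """Hand the wrapped processor a fresh workbench and always clean it up."""
        @wraps(func)
        def wrapper(*args, **kwargs):
            workbench = mgg.workbench_manager.create_workbench()  # assumed API
            try:
                return func(*args, workbench=workbench, **kwargs)
            finally:
                workbench.destroy_self()  # assumed API
        return wrapper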
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
We were using lots of tempfiles in the audio and video processing
backends, which worked around our workbench system. Still use the
tempfile module, but create the files in the workbench directory. This
can help with uploads of large files (#419), where /tmp might
be a smallish tmpfs while our workbench is on a real disk.
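The tempfile calls just get pointed at the workbench, roughly (the
workbench's directory attribute name is an assumption):

    import tempfile

    # before: tempfile.NamedTemporaryFile() landing in /tmp
    # after: same call, but inside the workbench directory on real disk
    tmp_dst = tempfile.NamedTemporaryFile(
        dir=workbench.dir,     # assumed attribute for the workbench path
        suffix='.webm',
        delete=False)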
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
We copy uploaded media from the queue store to the local workbench
and then to its final destination. The latter was done by simply:
dst.write(src.read()) which is of course evil as it reads the whole
file content into RAM. Which *might* arguably still be OK for
images, but you never know.
Make use of the provided storage() methods that offer chunked copying
rather than opening and fudging with files ourselves.
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
This concludes the db.sql.* -> db.* move. Our db abstraction layer is
sqlalchemy, so there is no need for a separate db.sql.* hierarchy.
All tests have been run for each of the commit series to make sure
everything works at every step.
We need to know the name of the backref, so that we can
access it by name on the MediaEntry. We might be able to
get this name by inspection, but this way is easier, for
now.
De-noisify the transcoding log and db updates. Previously we would log
and save the progress percentage every second, even if it had not changed
at all. Save the progress percentage in the Transcoder and only log/update
when the percentage has actually changed.
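The check amounts to something like this (attribute and method names are
assumed, only the progress-reporting part is sketched):

    import logging

    _log = logging.getLogger(__name__)

    class Transcoder(object):
        """Only the progress-reporting part is sketched here."""

        def __init__(self, progress_callback=None):
            self.progress_callback = progress_callback
            self.progress_percentage = None

        def _report_progress(self, position, duration):
            percent = int(100 * position / duration)
            if percent == self.progress_percentage:
                return  # unchanged -> no log line, no db update
            self.progress_percentage = percent
            _log.info('%s%% of the video transcoded', percent)
            if self.progress_callback:
                self.progress_callback(percent)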
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
In all cases where get_media_manager(_media_type_as_string) was called in
our code base we ultimately passed in a "MediaEntry().media_type" to get
the matching MEDIA_MANAGER. So it makes sense to make this a function of
the MediaEntry rather than a global function in mediagoblin.media_types,
and to stop passing around media_entry.media_type as an argument all the time.
It saves a few import statements and arguments. I also made it so the
media_manager property is cached for subsequent calls, although I am not too
sure that this is needed (there are other cases for which this would make
more sense).
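A sketch of roughly how the cached property could look (the lookup
helper name is an assumption, not necessarily the real one):

    class MediaEntry(object):
        # ... the real model has many more columns and methods ...

        @property
        def media_manager(self):
            """Return this entry's MEDIA_MANAGER dict, cached after first use."""
            if getattr(self, '_media_manager', None) is None:
                # self.media_type holds the media type's module-path string;
                # get_media_managers() is assumed to yield (name, manager) pairs
                for media_type, manager in get_media_managers():
                    if media_type == self.media_type:
                        self._media_manager = manager
                        break
            return self._media_manager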
Also add a get_media_manager test to the media submission tests. It submits
an image and checks that both media.media_type and media.media_manager
return the right thing. Not sure if these tests could not be merged with an
existing submission test, but it can't hurt to have things explicit.
TODO: Right now we iterate through all existing media_managers to find the
right one based on the string of its module name. This should be made a simple
dict lookup to avoid all the extra work.
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
Previously the .blend and .py files had to be in the same directory
that mediagoblin/celery was launched from. This is now fixed so they are
pulled out of the package proper.
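One way to resolve package-relative paths like this is pkg_resources
(whether that is exactly what is used here is an assumption, and the
package and asset paths are illustrative):

    import pkg_resources

    BLEND_FILE = pkg_resources.resource_filename(
        'mediagoblin.media_types.stl', 'assets/blender_render.blend')
    BLEND_SCRIPT = pkg_resources.resource_filename(
        'mediagoblin.media_types.stl', 'assets/blender_render.py')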
So far templates required a very complex blurb to simply insert a
thumbnail URL, exposing much of the internal logic to the template
designer. In addition, we would fail with an error if for some
reason the media_files['thumb'] entry was never populated.
This adds the MediaEntry.thumb_url property that template designers
can simply use. It will do the right thing, either fetching the proper
thumbnail or handing back a generic icon specified in a media's
MEDIA_MANAGER as "default_thumb".
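Approximately what the property does (how the generic icon's URL is
built is an assumption here):

    from mediagoblin import mg_globals

    class MediaEntry(object):
        # ... only the new property shown ...

        @property
        def thumb_url(self):
            """URL of this entry's thumbnail, or the media type's generic icon."""
            if self.media_files.get('thumb'):
                return mg_globals.app.public_store.file_url(
                    self.media_files['thumb'])
            # 'thumb' never got populated -> fall back to the icon the
            # MEDIA_MANAGER declares as "default_thumb"
            return mg_globals.app.staticdirector(
                self.media_manager['default_thumb'])

In a template this then boils down to something like {{ entry.thumb_url }}.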
Add an image default fallback icon (stolen from Tango, which is
Public Domain since version 0.8.90 as I understand) since the one
we referred to did not exist. Perhaps a "broken image" icon
would be better, but I'll leave that to our capable designers.
All templates have been modified to make use of the new thumb_url
property.
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
While creating the spectrogram, an alternative version of the audio
file is needed. Before this, it was a WAV format file; the
issue with WAV is that it takes a lot of space. Starting with this it
will be an OGG file.
Rejoice :)