Merge branch 'master' into OPW-Moderation-Update

Conflicts:
	mediagoblin/db/migrations.py
tilly-Q 2013-09-12 18:58:04 -04:00
commit 045fe0ee9d
131 changed files with 23317 additions and 7306 deletions

@@ -4,5 +4,5 @@ source_file = mediagoblin/i18n/en/LC_MESSAGES/mediagoblin.po
 source_lang = en
 [main]
-host = https://www.transifex.net
+host = https://transifex.com

AUTHORS

@@ -18,11 +18,13 @@ Thank you!
 * András Veres-Szentkirályi
 * Bassam Kurdali
 * Bernhard Keller
+* Brandon Invergo
 * Brett Smith
 * Caleb Forbes Davis V
 * Corey Farwell
 * Chris Moylan
 * Christopher Allan Webber
+* Dan Callahan
 * David Thompson
 * Daniel Neel
 * Deb Nicholson
@@ -41,11 +43,14 @@ Thank you!
 * Joar Wandborg
 * Jorge Araya Navarro
 * Karen Rustad
+* Kenneth Dombrowski
+* Kushal Kumaran
 * Kuno Woudt
 * Laura Arjona
 * Larisa Hoffenbecker
 * Luke Slater
 * Manuel Urbano Santos
+* Marcel van der Boom
 * Mark Holmquist
 * Mats Sjöberg
 * Matt Lee
@@ -61,20 +66,25 @@ Thank you!
 * Rodney Ewing
 * Runar Petursson
 * Sacha De'Angeli
+* Sam Clegg
 * Sam Kleinman
 * Sam Tuke
 * Sebastian Spaeth
 * Shawn Khan
 * Simon Fondrie-Teitler
 * Stefano Zacchiroli
+* sturm
 * Tiberiu C. Turbureanu
 * Tran Thanh Bao
 * Tryggvi Björgvinsson
 * Shawn Khan
 * Will Kahn-Greene
+
+Special thanks to:
+
+* Starblessed of viewskew (lending server space!)
 
 If you think your name should be on this list, let us know!
 
 We also are currently borrowing an image in
 mediagoblin/static/images/media_thumbs/image.png from the wonderful

@@ -2,7 +2,7 @@
 #
 # You can set these variables from the command line.
-SPHINXOPTS    = -W
+SPHINXOPTS    =
 SPHINXBUILD   = sphinx-build
 PAPER         =
 BUILDDIR      = build

@@ -49,7 +49,7 @@ redirect_uri
 Response
 --------
-You will get back a response::
+You will get back a response:
 client_id
     This identifies a client

@@ -56,7 +56,6 @@ Part 2: Core plugin documentation
    plugindocs/flatpagesfile
    plugindocs/sampleplugin
-   plugindocs/oauth
    plugindocs/trim_whitespace
    plugindocs/raven
    plugindocs/basic_auth

@@ -203,18 +203,20 @@ Clone the MediaGoblin repository and set up the git submodules::
     cd mediagoblin
     git submodule init && git submodule update
 
-Set up the in-package virtualenv via make::
+And set up the in-package virtualenv::
 
-    ./bootstrap.sh && ./configure && make
+    (virtualenv --system-site-packages . || virtualenv .) && ./bin/python setup.py develop
 
 .. note::
 
-   Prefer not to use make, or want to use the "old way" of installing
-   MediaGoblin (maybe you know how to use virtualenv and python
-   packaging)? You still can! All that the above make script is doing
-   is installing an in-package virtualenv and running
-   ./bin/python setup.py develop
+   We presently have an experimental make-style deployment system.  If
+   you'd like to try it, instead of the above command, you can run::
+
+     ./bootstrap.sh && ./configure && make
+
+   This also includes a number of nice features, such as keeping your
+   virtualenv up to date by simply running `make update`.
 
 .. ::

@@ -22,6 +22,74 @@
If you're upgrading from a previous release, please read it
carefully, or at least skim over it.

0.5.0
=====

**NOTE:** If using the API is important to you, we're in a state of
transition towards a new API via the Pump API.  As such, though the old
API still probably works, some changes have happened to the way oauth
works to make it more Pump-compatible.  If you're heavily using
clients using the old API, you may wish to hold off on upgrading for
now.  Otherwise, jump in and have fun! :)

**Do this to upgrade**

1. Make sure to run
   ``./bin/python setup.py develop --upgrade && ./bin/gmg dbupdate``
   after upgrading.

.. mention something about new, experimental configure && make support

2. Note that a couple of things have changed with ``mediagoblin.ini``.  First,
   we have a new authentication system.  You need to add
   ``[[mediagoblin.plugins.basic_auth]]`` under the ``[plugins]`` section of
   your config file.  Second, media types are now plugins, so you need to add
   each media type under the ``[plugins]`` section of your config file.

3. We have made a script to transition your ``mediagoblin_local.ini`` file for
   you.  This script can be found at

.. add a link to the script

If you run into problems, don't hesitate to
`contact us <http://mediagoblin.org/pages/join.html>`_
(IRC is often best).

**New features**

* As mentioned above, we now have a pluggable authentication system.  You
  can use any combination of the multiple authentication systems
  (:ref:`basic_auth-chapter`, :ref:`persona-chapter`, :ref:`openid-chapter`)
  or write your own!
* Media types are now plugins!  This means that, in the future, new media
  types will be able to do new, fancy things they couldn't before.
* We now have notification support!  This allows you to subscribe to media
  comments and to be notified when someone comments on your media.
* New reprocessing framework!  You can now reprocess failed uploads, and
  send already-processed media back to processing to re-transcode or resize
  media.
* Comment preview!
* Users now have the ability to change the email address associated with
  their account.
* New oauth code as we move closer to federation support.
* Experimental pyconfigure support for GNU-style configure and makefile
  deployment.
* Database foundations!  You can now pre-populate the database models.
* Way faster unit test run-time via an in-memory database.
* All mongokit stuff has been cleaned up.
* Fixes for non-ASCII filenames.
* The option to stay logged in.
* MediaGoblin has been upgraded to use the latest
  `celery <http://celeryproject.org/>`_ version.
* You can now add jinja2 extensions to your config file to use in custom
  templates.
* Fixed video permission issues.
* MediaGoblin docs are now hosted with multiple versions.
* We removed redundant tooltips from the STL media display.
* We are now using itsdangerous for verification tokens.

0.4.1
=====
@@ -80,6 +148,7 @@ please note the following:

**New features**

* PDF media type!
* Improved plugin system.  More flexible, better documented, with a
  new plugin authoring section of the docs.
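Step 2 of the upgrade notes above asks for basic_auth and each media type to be listed under ``[plugins]``. A hedged sketch of the relevant fragment of an upgraded ``mediagoblin_local.ini`` (the exact set of media-type sections depends on which types your install uses; image and video are shown only as examples):

```ini
[plugins]

# the new pluggable authentication system
[[mediagoblin.plugins.basic_auth]]

# media types are now plugins; add one section per type you use
[[mediagoblin.media_types.image]]

[[mediagoblin.media_types.video]]
```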

@@ -23,4 +23,4 @@
 # see http://www.python.org/dev/peps/pep-0386/
-__version__ = "0.5.0.dev"
+__version__ = "0.6.0.dev"

@@ -341,7 +341,7 @@ def verify_forgot_password(request):
         messages.add_message(
             request, messages.ERROR,
             _('You are no longer an active user. Please contact the system'
-              ' admin to reactivate your accoutn.'))
+              ' admin to reactivate your account.'))
         return redirect(
             request, 'index')

@@ -104,47 +104,6 @@ max_height = integer(default=640)
 max_width = integer(default=180)
 max_height = integer(default=180)
 
-[media_type:mediagoblin.media_types.image]
-# One of BICUBIC, BILINEAR, NEAREST, ANTIALIAS
-resize_filter = string(default="ANTIALIAS")
-#level of compression used when resizing images
-quality = integer(default=90)
-
-[media_type:mediagoblin.media_types.video]
-# Should we keep the original file?
-keep_original = boolean(default=False)
-
-# 0 means autodetect, autodetect means number_of_CPUs - 1
-vp8_threads = integer(default=0)
-
-# Range: 0..10
-vp8_quality = integer(default=8)
-
-# Range: -0.1..1
-vorbis_quality = float(default=0.3)
-
-# Autoplay the video when page is loaded?
-auto_play = boolean(default=False)
-
-[[skip_transcode]]
-mime_types = string_list(default=list("video/webm"))
-container_formats = string_list(default=list("Matroska"))
-video_codecs = string_list(default=list("VP8 video"))
-audio_codecs = string_list(default=list("Vorbis"))
-dimensions_match = boolean(default=True)
-
-[media_type:mediagoblin.media_types.audio]
-keep_original = boolean(default=True)
-# vorbisenc quality
-quality = float(default=0.3)
-create_spectrogram = boolean(default=True)
-spectrogram_fft_size = integer(default=4096)
-
-[media_type:mediagoblin.media_types.ascii]
-thumbnail_font = string(default=None)
-
-[media_type:mediagoblin.media_types.pdf]
-pdf_js = boolean(default=True)
-
 [celery]
 # default result stuff
 CELERY_RESULT_BACKEND = string(default="database")

@@ -301,7 +301,6 @@ def drop_token_related_User_columns(db):
     metadata = MetaData(bind=db.bind)
     user_table = inspect_table(metadata, 'core__users')
 
     verification_key = user_table.columns['verification_key']
     fp_verification_key = user_table.columns['fp_verification_key']
     fp_token_expire = user_table.columns['fp_token_expire']
@@ -323,7 +322,6 @@ class CommentSubscription_v0(declarative_base()):
     user_id = Column(Integer, ForeignKey(User.id), nullable=False)
 
     notify = Column(Boolean, nullable=False, default=True)
     send_email = Column(Boolean, nullable=False, default=True)
@@ -369,6 +367,8 @@ def add_new_notification_tables(db):
     CommentNotification_v0.__table__.create(db.bind)
     ProcessingNotification_v0.__table__.create(db.bind)
 
+    db.commit()
+
 
 @RegisterMigration(13, MIGRATIONS)
 def pw_hash_nullable(db):
@@ -384,6 +384,9 @@ def pw_hash_nullable(db):
         constraint = UniqueConstraint('username', table=user_table)
         constraint.create()
 
+    db.commit()
+
 
 # oauth1 migrations
 class Client_v0(declarative_base()):
     """
@@ -462,6 +465,16 @@ def create_oauth1_tables(db):
     db.commit()
 
 
+@RegisterMigration(15, MIGRATIONS)
+def wants_notifications(db):
+    """Add a wants_notifications field to User model"""
+    metadata = MetaData(bind=db.bind)
+    user_table = inspect_table(metadata, "core__users")
+    col = Column('wants_notifications', Boolean, default=True)
+    col.create(user_table)
+    db.commit()
+
+
 class ReportBase_v0(declarative_base()):
     __tablename__ = 'core__reports'
     id = Column(Integer, primary_key=True)
@@ -483,6 +496,8 @@ class CommentReport_v0(ReportBase_v0):
         primary_key=True)
     comment_id = Column(Integer, ForeignKey(MediaComment.id), nullable=False)
 
+
 class MediaReport_v0(ReportBase_v0):
     __tablename__ = 'core__reports_on_media'
     __mapper_args__ = {'polymorphic_identity': 'media_report'}
@@ -515,7 +530,7 @@ class PrivilegeUserAssociation_v0(declarative_base()):
                           ForeignKey(Privilege.id),
                           primary_key=True)
 
-@RegisterMigration(15, MIGRATIONS)
+@RegisterMigration(16, MIGRATIONS)
 def create_moderation_tables(db):
     ReportBase_v0.__table__.create(db.bind)
     CommentReport_v0.__table__.create(db.bind)
@@ -531,7 +546,7 @@ def create_moderation_tables(db):
         p.save()
 
-@RegisterMigration(16, MIGRATIONS)
+@RegisterMigration(17, MIGRATIONS)
 def update_user_privilege_columns(db):
     # first, create the privileges which would be created by foundations
     default_privileges = Privilege.query.filter(
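The renumbering above (``@RegisterMigration(15, …)`` becoming 16 and 17) is the heart of the ``migrations.py`` merge conflict: both branches had claimed migration number 15, and each number may be registered only once. A simplified stand-in for the real ``RegisterMigration`` decorator (the actual one lives in MediaGoblin's migration tools; this sketch only illustrates the uniqueness and ordering constraint):

```python
# Registry mapping migration number -> migration function.
MIGRATIONS = {}


class RegisterMigration(object):
    """Decorator that registers a migration under a unique integer.

    Registering the same number twice is rejected, which is why a merge
    bringing in a new migration 15 forces the moderation branch to renumber
    its own migrations to 16 and 17.
    """
    def __init__(self, migration_number, migration_registry):
        if migration_number in migration_registry:
            raise AssertionError(
                'Migration %d already registered' % migration_number)
        self.migration_number = migration_number
        self.migration_registry = migration_registry

    def __call__(self, migration):
        self.migration_registry[self.migration_number] = migration
        return migration


@RegisterMigration(15, MIGRATIONS)
def wants_notifications(db):
    pass  # body elided; see the diff above


@RegisterMigration(16, MIGRATIONS)
def create_moderation_tables(db):
    pass  # body elided; see the diff above


# Pending migrations are applied in ascending numeric order.
ordered = sorted(MIGRATIONS)
```

With this constraint, ``dbupdate`` can always replay migrations deterministically; the cost is that concurrent branches must coordinate (or renumber) their migration numbers at merge time.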

@@ -70,6 +70,7 @@ class User(Base, UserMixin):
     # Intended to be nullable=False, but migrations would not work for it
     # set to nullable=True implicitly.
     wants_comment_notification = Column(Boolean, default=True)
+    wants_notifications = Column(Boolean, default=True)
     license_preference = Column(Unicode)
     #--column admin is VESTIGIAL with privileges and should not be used------------
     #--should be dropped ASAP though a bug in sqlite3 prevents this atm------------

@@ -61,12 +61,10 @@ class EditProfileForm(wtforms.Form):
 
 class EditAccountForm(wtforms.Form):
-    new_email = wtforms.TextField(
-        _('New email address'),
-        [wtforms.validators.Optional(),
-         normalize_user_or_email_field(allow_user=False)])
     wants_comment_notification = wtforms.BooleanField(
         description=_("Email me when others comment on my media"))
+    wants_notifications = wtforms.BooleanField(
+        description=_("Enable insite notifications about events."))
     license_preference = wtforms.SelectField(
         _('License preference'),
         [
@@ -111,3 +109,15 @@ class ChangePassForm(wtforms.Form):
         [wtforms.validators.Required(),
          wtforms.validators.Length(min=6, max=30)],
         id="password")
+
+
+class ChangeEmailForm(wtforms.Form):
+    new_email = wtforms.TextField(
+        _('New email address'),
+        [wtforms.validators.Required(),
+         normalize_user_or_email_field(allow_user=False)])
+    password = wtforms.PasswordField(
+        _('Password'),
+        [wtforms.validators.Required()],
+        description=_(
+            "Enter your password to prove you own this account."))

@@ -28,3 +28,5 @@ add_route('mediagoblin.edit.pass', '/edit/password/',
           'mediagoblin.edit.views:change_pass')
 add_route('mediagoblin.edit.verify_email', '/edit/verify_email/',
           'mediagoblin.edit.views:verify_email')
+add_route('mediagoblin.edit.email', '/edit/email/',
+          'mediagoblin.edit.views:change_email')

@@ -228,24 +228,22 @@ def edit_account(request):
     user = request.user
     form = forms.EditAccountForm(request.form,
         wants_comment_notification=user.wants_comment_notification,
-        license_preference=user.license_preference)
+        license_preference=user.license_preference,
+        wants_notifications=user.wants_notifications)
 
     if request.method == 'POST' and form.validate():
         user.wants_comment_notification = form.wants_comment_notification.data
+        user.wants_notifications = form.wants_notifications.data
         user.license_preference = form.license_preference.data
 
-        if form.new_email.data:
-            _update_email(request, form, user)
-
-        if not form.errors:
-            user.save()
-            messages.add_message(request,
-                                 messages.SUCCESS,
-                                 _("Account settings saved"))
-            return redirect(request,
-                            'mediagoblin.user_pages.user_home',
-                            user=user.username)
+        user.save()
+        messages.add_message(request,
+                             messages.SUCCESS,
+                             _("Account settings saved"))
+        return redirect(request,
+                        'mediagoblin.user_pages.user_home',
+                        user=user.username)
 
     return render_to_response(
         request,
@@ -425,30 +423,52 @@ def verify_email(request):
                     user=user.username)
 
 
-def _update_email(request, form, user):
-    new_email = form.new_email.data
-    users_with_email = User.query.filter_by(
-        email=new_email).count()
-
-    if users_with_email:
-        form.new_email.errors.append(
-            _('Sorry, a user with that email address'
-              ' already exists.'))
-
-    elif not users_with_email:
-        verification_key = get_timed_signer_url(
-            'mail_verification_token').dumps({
-                'user': user.id,
-                'email': new_email})
-
-        rendered_email = render_template(
-            request, 'mediagoblin/edit/verification.txt',
-            {'username': user.username,
-             'verification_url': EMAIL_VERIFICATION_TEMPLATE.format(
-                uri=request.urlgen('mediagoblin.edit.verify_email',
-                                   qualified=True),
-                verification_key=verification_key)})
-
-        email_debug_message(request)
-        auth_tools.send_verification_email(user, request, new_email,
-                                           rendered_email)
+def change_email(request):
+    """ View to change the user's email """
+    form = forms.ChangeEmailForm(request.form)
+    user = request.user
+
+    # If no password authentication, no need to enter a password
+    if 'pass_auth' not in request.template_env.globals or not user.pw_hash:
+        form.__delitem__('password')
+
+    if request.method == 'POST' and form.validate():
+        new_email = form.new_email.data
+        users_with_email = User.query.filter_by(
+            email=new_email).count()
+
+        if users_with_email:
+            form.new_email.errors.append(
+                _('Sorry, a user with that email address'
+                  ' already exists.'))
+
+        if form.password and user.pw_hash and not auth.check_password(
+                form.password.data, user.pw_hash):
+            form.password.errors.append(
+                _('Wrong password'))
+
+        if not form.errors:
+            verification_key = get_timed_signer_url(
+                'mail_verification_token').dumps({
+                    'user': user.id,
+                    'email': new_email})
+
+            rendered_email = render_template(
+                request, 'mediagoblin/edit/verification.txt',
+                {'username': user.username,
+                 'verification_url': EMAIL_VERIFICATION_TEMPLATE.format(
+                    uri=request.urlgen('mediagoblin.edit.verify_email',
+                                       qualified=True),
+                    verification_key=verification_key)})
+
+            email_debug_message(request)
+            auth_tools.send_verification_email(user, request, new_email,
+                                               rendered_email)
+
+            return redirect(request, 'mediagoblin.edit.account')
+
+    return render_to_response(
+        request,
+        'mediagoblin/edit/change_email.html',
+        {'form': form,
+         'user': user})

@@ -45,6 +45,10 @@ SUBCOMMAND_MAP = {
         'setup': 'mediagoblin.gmg_commands.assetlink:assetlink_parser_setup',
         'func': 'mediagoblin.gmg_commands.assetlink:assetlink',
         'help': 'Link assets for themes and plugins for static serving'},
+    'reprocess': {
+        'setup': 'mediagoblin.gmg_commands.reprocess:reprocess_parser_setup',
+        'func': 'mediagoblin.gmg_commands.reprocess:reprocess',
+        'help': 'Reprocess media entries'},
     # 'theme': {
     #     'setup': 'mediagoblin.gmg_commands.theme:theme_parser_setup',
     #     'func': 'mediagoblin.gmg_commands.theme:theme',

@@ -16,6 +16,7 @@
 from mediagoblin import mg_globals
 from mediagoblin.db.open import setup_connection_and_db_from_config
+from mediagoblin.gmg_commands import util as commands_util
 from mediagoblin.storage.filestorage import BasicFileStorage
 from mediagoblin.init import setup_storage, setup_global_and_app_config
@@ -223,6 +224,7 @@ def env_export(args):
     '''
     Export database and media files to a tar archive
     '''
+    commands_util.check_unrecognized_args(args)
     if args.cache_path:
         if os.path.exists(args.cache_path):
             _log.error('The cache directory must not exist '

@@ -0,0 +1,302 @@
# GNU MediaGoblin -- federated, autonomous media hosting
# Copyright (C) 2011, 2012 MediaGoblin contributors.  See AUTHORS.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import os

from mediagoblin import mg_globals
from mediagoblin.db.models import MediaEntry
from mediagoblin.gmg_commands import util as commands_util
from mediagoblin.submit.lib import run_process_media
from mediagoblin.tools.translate import lazy_pass_to_ugettext as _
from mediagoblin.tools.pluginapi import hook_handle
from mediagoblin.processing import (
    ProcessorDoesNotExist, ProcessorNotEligible,
    get_entry_and_processing_manager, get_processing_manager_for_type,
    ProcessingManagerDoesNotExist)


def reprocess_parser_setup(subparser):
    subparser.add_argument(
        '--celery',
        action='store_true',
        help="Don't process eagerly, pass off to celery")

    subparsers = subparser.add_subparsers(dest="reprocess_subcommand")

    ###################
    # available command
    ###################
    available_parser = subparsers.add_parser(
        "available",
        help="Find out what actions are available for this media")

    available_parser.add_argument(
        "id_or_type",
        help="Media id or media type to check")

    available_parser.add_argument(
        "--action-help",
        action="store_true",
        help="List argument help for each action available")

    available_parser.add_argument(
        "--state",
        help="The state of media you would like to reprocess")

    #############
    # run command
    #############
    run_parser = subparsers.add_parser(
        "run",
        help="Run a reprocessing on one or more media")

    run_parser.add_argument(
        'media_id',
        help="The media_entry id(s) you wish to reprocess.")

    run_parser.add_argument(
        'reprocess_command',
        help="The reprocess command you intend to run")

    run_parser.add_argument(
        'reprocess_args',
        nargs=argparse.REMAINDER,
        help="rest of arguments to the reprocessing tool")

    ################
    # thumbs command
    ################
    thumbs = subparsers.add_parser(
        'thumbs',
        help='Regenerate thumbs for all processed media')

    thumbs.add_argument(
        '--size',
        nargs=2,
        type=int,
        metavar=('max_width', 'max_height'))

    #################
    # initial command
    #################
    subparsers.add_parser(
        'initial',
        help='Reprocess all failed media')

    ##################
    # bulk_run command
    ##################
    bulk_run_parser = subparsers.add_parser(
        'bulk_run',
        help='Run reprocessing on a given media type or state')

    bulk_run_parser.add_argument(
        'type',
        help='The type of media you would like to process')

    bulk_run_parser.add_argument(
        '--state',
        default='processed',
        nargs='?',
        help="The state of the media you would like to process. Defaults to"
             " 'processed'")

    bulk_run_parser.add_argument(
        'reprocess_command',
        help='The reprocess command you intend to run')

    bulk_run_parser.add_argument(
        'reprocess_args',
        nargs=argparse.REMAINDER,
        help='The rest of the arguments to the reprocessing tool')

    ###############
    # help command?
    ###############


def available(args):
    # Get the media type, either by looking up media id, or by specific type
    try:
        media_id = int(args.id_or_type)
        media_entry, manager = get_entry_and_processing_manager(media_id)
        media_type = media_entry.media_type
    except ValueError:
        media_type = args.id_or_type
        media_entry = None
        manager = get_processing_manager_for_type(media_type)
    except ProcessingManagerDoesNotExist:
        entry = MediaEntry.query.filter_by(id=args.id_or_type).first()
        print 'No such processing manager for {0}'.format(entry.media_type)

    if args.state:
        processors = manager.list_all_processors_by_state(args.state)
    elif media_entry is None:
        processors = manager.list_all_processors()
    else:
        processors = manager.list_eligible_processors(media_entry)

    print "Available processors:"
    print "====================="
    print ""

    if args.action_help:
        for processor in processors:
            print processor.name
            print "-" * len(processor.name)

            parser = processor.generate_parser()
            parser.print_help()
            print ""
    else:
        for processor in processors:
            if processor.description:
                print " - %s: %s" % (processor.name, processor.description)
            else:
                print " - %s" % processor.name


def run(args, media_id=None):
    if not media_id:
        media_id = args.media_id
    try:
        media_entry, manager = get_entry_and_processing_manager(media_id)

        # TODO: (maybe?) This could probably be handled entirely by the
        # processor class...
        try:
            processor_class = manager.get_processor(
                args.reprocess_command, media_entry)
        except ProcessorDoesNotExist:
            print 'No such processor "%s" for media with id "%s"' % (
                args.reprocess_command, media_entry.id)
            return
        except ProcessorNotEligible:
            print 'Processor "%s" exists but media "%s" is not eligible' % (
                args.reprocess_command, media_entry.id)
            return

        reprocess_parser = processor_class.generate_parser()
        reprocess_args = reprocess_parser.parse_args(args.reprocess_args)
        reprocess_request = processor_class.args_to_request(reprocess_args)
        run_process_media(
            media_entry,
            reprocess_action=args.reprocess_command,
            reprocess_info=reprocess_request)

    except ProcessingManagerDoesNotExist:
        entry = MediaEntry.query.filter_by(id=media_id).first()
        print 'No such processing manager for {0}'.format(entry.media_type)


def bulk_run(args):
    """
    Bulk reprocessing of a given media_type
    """
    query = MediaEntry.query.filter_by(media_type=args.type,
                                       state=args.state)

    for entry in query:
        run(args, entry.id)


def thumbs(args):
    """
    Regenerate thumbs for all processed media
    """
    query = MediaEntry.query.filter_by(state='processed')

    for entry in query:
        try:
            media_entry, manager = get_entry_and_processing_manager(entry.id)

            # TODO: (maybe?) This could probably be handled entirely by the
            # processor class...
            try:
                processor_class = manager.get_processor(
                    'resize', media_entry)
            except ProcessorDoesNotExist:
                print 'No such processor "%s" for media with id "%s"' % (
                    'resize', media_entry.id)
                return
            except ProcessorNotEligible:
                print 'Processor "%s" exists but media "%s" is not eligible' % (
                    'resize', media_entry.id)
                return

            reprocess_parser = processor_class.generate_parser()

            # prepare filetype and size to be passed into reprocess_parser
            if args.size:
                extra_args = 'thumb --{0} {1} {2}'.format(
                    processor_class.thumb_size,
                    args.size[0],
                    args.size[1])
            else:
                extra_args = 'thumb'

            reprocess_args = reprocess_parser.parse_args(extra_args.split())
            reprocess_request = processor_class.args_to_request(reprocess_args)

            run_process_media(
                media_entry,
                reprocess_action='resize',
                reprocess_info=reprocess_request)

        except ProcessingManagerDoesNotExist:
            print 'No such processing manager for {0}'.format(entry.media_type)


def initial(args):
    """
    Reprocess all failed media
    """
    query = MediaEntry.query.filter_by(state='failed')

    for entry in query:
        try:
            media_entry, manager = get_entry_and_processing_manager(entry.id)
            run_process_media(
                media_entry,
                reprocess_action='initial')
        except ProcessingManagerDoesNotExist:
            print 'No such processing manager for {0}'.format(entry.media_type)


def reprocess(args):
    # Run eagerly unless explicitly set not to
    if not args.celery:
        os.environ['CELERY_ALWAYS_EAGER'] = 'true'

    commands_util.setup_app(args)

    if args.reprocess_subcommand == "run":
        run(args)

    elif args.reprocess_subcommand == "available":
        available(args)

    elif args.reprocess_subcommand == "bulk_run":
        bulk_run(args)

    elif args.reprocess_subcommand == "thumbs":
        thumbs(args)

    elif args.reprocess_subcommand == "initial":
        initial(args)
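The ``thumbs`` subcommand above does a two-stage argparse round trip: the gmg-level parser collects ``--size``, then ``thumbs()`` rebuilds a per-processor command line and re-parses it with the processor's own generated parser. A minimal, self-contained sketch of that two-stage parse (the second parser's ``thumb --thumb_size W H`` grammar is a hypothetical stand-in for whatever ``processor_class.generate_parser()`` actually produces):

```python
import argparse

# Stage 1: the gmg-level subcommand parser, mirroring reprocess_parser_setup
parser = argparse.ArgumentParser(prog='gmg reprocess')
subparsers = parser.add_subparsers(dest='reprocess_subcommand')
thumbs = subparsers.add_parser('thumbs')
thumbs.add_argument('--size', nargs=2, type=int,
                    metavar=('max_width', 'max_height'))

# e.g. the user ran: gmg reprocess thumbs --size 640 480
args = parser.parse_args(['thumbs', '--size', '640', '480'])

# Stage 2: thumbs() assembles a processor-level command line from args.size
# and re-parses it with the processor's parser (sketched here)
extra_args = 'thumb --thumb_size {0} {1}'.format(*args.size)
proc_parser = argparse.ArgumentParser()
proc_parser.add_argument('request')
proc_parser.add_argument('--thumb_size', nargs=2, type=int)
proc_args = proc_parser.parse_args(extra_args.split())
```

This indirection lets each media type define its own reprocessing arguments while the top-level CLI stays generic; note how ``nargs=argparse.REMAINDER`` in the real ``run`` subcommand similarly forwards unparsed arguments to the processor's parser.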

@@ -36,5 +36,5 @@ def prompt_if_not_set(variable, text, password=False):
             variable=raw_input(text + u' ')
         else:
             variable=getpass.getpass(text + u' ')
 
     return variable


@@ -15,21 +15,15 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

 from mediagoblin.media_types import MediaManagerBase
-from mediagoblin.media_types.ascii.processing import process_ascii, \
-    sniff_handler
-from mediagoblin.tools import pluginapi
+from mediagoblin.media_types.ascii.processing import AsciiProcessingManager, \
+    sniff_handler

 ACCEPTED_EXTENSIONS = ["txt", "asc", "nfo"]
 MEDIA_TYPE = 'mediagoblin.media_types.ascii'


-def setup_plugin():
-    config = pluginapi.get_config(MEDIA_TYPE)
-
-
 class ASCIIMediaManager(MediaManagerBase):
     human_readable = "ASCII"
-    processor = staticmethod(process_ascii)
     display_template = "mediagoblin/media_displays/ascii.html"
     default_thumb = "images/media_thumbs/ascii.jpg"

@@ -40,8 +34,8 @@ def get_media_type_and_manager(ext):

 hooks = {
-    'setup': setup_plugin,
     'get_media_type_and_manager': get_media_type_and_manager,
     ('media_manager', MEDIA_TYPE): lambda: ASCIIMediaManager,
+    ('reprocess_manager', MEDIA_TYPE): lambda: AsciiProcessingManager,
     'sniff_handler': sniff_handler,
 }
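The hooks table above registers the media manager behind a lambda, so the class is only resolved when the hook is actually looked up. A minimal sketch of that lazy-lookup pattern (`resolve_hook` is a hypothetical helper, not MediaGoblin's actual pluginapi):

```python
# Sketch of a lazy hook registry; resolve_hook is a hypothetical helper,
# not part of MediaGoblin's pluginapi.
class ASCIIMediaManager(object):
    human_readable = "ASCII"

MEDIA_TYPE = 'mediagoblin.media_types.ascii'

hooks = {
    ('media_manager', MEDIA_TYPE): lambda: ASCIIMediaManager,
}

def resolve_hook(key):
    # Calling the stored lambda defers the class lookup until the
    # hook is actually needed
    return hooks[key]()

manager = resolve_hook(('media_manager', MEDIA_TYPE))
assert manager.human_readable == "ASCII"
```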

View File

@@ -0,0 +1,4 @@
+[plugin_spec]
+
+thumbnail_font = string(default=None)
+

View File

@@ -13,6 +13,7 @@
 #
 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

+import argparse
 import chardet
 import os
 try:
@@ -22,7 +23,11 @@ except ImportError:
 import logging

 from mediagoblin import mg_globals as mgg
-from mediagoblin.processing import create_pub_filepath
+from mediagoblin.processing import (
+    create_pub_filepath, FilenameBuilder,
+    MediaProcessor, ProcessingManager,
+    get_process_filename, copy_original,
+    store_public, request_from_args)
 from mediagoblin.media_types.ascii import asciitoimage

 _log = logging.getLogger(__name__)
@@ -43,106 +48,202 @@ def sniff_handler(media_file, **kw):
     return None


-def process_ascii(proc_state):
-    """Code to process a txt file. Will be run by celery.
-
-    A Workbench() represents a local tempory dir. It is automatically
-    cleaned up when this function exits.
-    """
-    entry = proc_state.entry
-    workbench = proc_state.workbench
-    ascii_config = mgg.global_config['media_type:mediagoblin.media_types.ascii']
-    # Conversions subdirectory to avoid collisions
-    conversions_subdir = os.path.join(
-        workbench.dir, 'conversions')
-    os.mkdir(conversions_subdir)
-
-    queued_filepath = entry.queued_media_file
-    queued_filename = workbench.localized_file(
-        mgg.queue_store, queued_filepath,
-        'source')
-    queued_file = file(queued_filename, 'rb')
-
-    with queued_file:
-        queued_file_charset = chardet.detect(queued_file.read())
-
-        # Only select a non-utf-8 charset if chardet is *really* sure
-        # Tested with "Feli\x0109an superjaron", which was detecte
-        if queued_file_charset['confidence'] < 0.9:
-            interpreted_charset = 'utf-8'
-        else:
-            interpreted_charset = queued_file_charset['encoding']
-
-        _log.info('Charset detected: {0}\nWill interpret as: {1}'.format(
-            queued_file_charset,
-            interpreted_charset))
-
-        queued_file.seek(0)  # Rewind the queued file
-
-        thumb_filepath = create_pub_filepath(
-            entry, 'thumbnail.png')
-
-        tmp_thumb_filename = os.path.join(
-            conversions_subdir, thumb_filepath[-1])
-
-        ascii_converter_args = {}
-
-        if ascii_config['thumbnail_font']:
-            ascii_converter_args.update(
-                {'font': ascii_config['thumbnail_font']})
-
-        converter = asciitoimage.AsciiToImage(
-            **ascii_converter_args)
-
-        thumb = converter._create_image(
-            queued_file.read())
-
-        with file(tmp_thumb_filename, 'w') as thumb_file:
-            thumb.thumbnail(
-                (mgg.global_config['media:thumb']['max_width'],
-                 mgg.global_config['media:thumb']['max_height']),
-                Image.ANTIALIAS)
-            thumb.save(thumb_file)
-
-        _log.debug('Copying local file to public storage')
-        mgg.public_store.copy_local_to_storage(
-            tmp_thumb_filename, thumb_filepath)
-
-        queued_file.seek(0)
-
-        original_filepath = create_pub_filepath(entry, queued_filepath[-1])
-
-        with mgg.public_store.get_file(original_filepath, 'wb') \
-                as original_file:
-            original_file.write(queued_file.read())
-
-        queued_file.seek(0)  # Rewind *again*
-
-        unicode_filepath = create_pub_filepath(entry, 'ascii-portable.txt')
-
-        with mgg.public_store.get_file(unicode_filepath, 'wb') \
-                as unicode_file:
-            # Decode the original file from its detected charset (or UTF8)
-            # Encode the unicode instance to ASCII and replace any non-ASCII
-            # with an HTML entity (&#
-            unicode_file.write(
-                unicode(queued_file.read().decode(
-                    interpreted_charset)).encode(
-                        'ascii',
-                        'xmlcharrefreplace'))
-
-    # Remove queued media file from storage and database.
-    # queued_filepath is in the task_id directory which should
-    # be removed too, but fail if the directory is not empty to be on
-    # the super-safe side.
-    mgg.queue_store.delete_file(queued_filepath)      # rm file
-    mgg.queue_store.delete_dir(queued_filepath[:-1])  # rm dir
-    entry.queued_media_file = []
-
-    media_files_dict = entry.setdefault('media_files', {})
-    media_files_dict['thumb'] = thumb_filepath
-    media_files_dict['unicode'] = unicode_filepath
-    media_files_dict['original'] = original_filepath
-
-    entry.save()
+class CommonAsciiProcessor(MediaProcessor):
+    """
+    Provides a base for various ascii processing steps
+    """
+    acceptable_files = ['original', 'unicode']
+
+    def common_setup(self):
+        self.ascii_config = mgg.global_config['plugins'][
+            'mediagoblin.media_types.ascii']
+
+        # Conversions subdirectory to avoid collisions
+        self.conversions_subdir = os.path.join(
+            self.workbench.dir, 'conversions')
+        os.mkdir(self.conversions_subdir)
+
+        # Pull down and set up the processing file
+        self.process_filename = get_process_filename(
+            self.entry, self.workbench, self.acceptable_files)
+        self.name_builder = FilenameBuilder(self.process_filename)
+
+        self.charset = None
+
+    def copy_original(self):
+        copy_original(
+            self.entry, self.process_filename,
+            self.name_builder.fill('{basename}{ext}'))
+
+    def _detect_charset(self, orig_file):
+        d_charset = chardet.detect(orig_file.read())
+
+        # Only select a non-utf-8 charset if chardet is *really* sure
+        # Tested with "Feli\x0109an superjaron", which was detected
+        if d_charset['confidence'] < 0.9:
+            self.charset = 'utf-8'
+        else:
+            self.charset = d_charset['encoding']
+
+        _log.info('Charset detected: {0}\nWill interpret as: {1}'.format(
+            d_charset,
+            self.charset))
+
+        # Rewind the file
+        orig_file.seek(0)
+
+    def store_unicode_file(self):
+        with file(self.process_filename, 'rb') as orig_file:
+            self._detect_charset(orig_file)
+            unicode_filepath = create_pub_filepath(self.entry,
+                                                   'ascii-portable.txt')
+
+            with mgg.public_store.get_file(unicode_filepath, 'wb') \
+                    as unicode_file:
+                # Decode the original file from its detected charset (or UTF8)
+                # Encode the unicode instance to ASCII and replace any
+                # non-ASCII with an HTML entity (&#
+                unicode_file.write(
+                    unicode(orig_file.read().decode(
+                        self.charset)).encode(
+                            'ascii',
+                            'xmlcharrefreplace'))
+
+        self.entry.media_files['unicode'] = unicode_filepath
+
+    def generate_thumb(self, font=None, thumb_size=None):
+        with file(self.process_filename, 'rb') as orig_file:
+            # If no font kwarg, check config
+            if not font:
+                font = self.ascii_config.get('thumbnail_font', None)
+            if not thumb_size:
+                thumb_size = (mgg.global_config['media:thumb']['max_width'],
+                              mgg.global_config['media:thumb']['max_height'])
+
+            tmp_thumb = os.path.join(
+                self.conversions_subdir,
+                self.name_builder.fill('{basename}.thumbnail.png'))
+
+            ascii_converter_args = {}
+
+            # If there is a font from either the config or kwarg, update
+            # ascii_converter_args
+            if font:
+                ascii_converter_args.update(
+                    {'font': self.ascii_config['thumbnail_font']})
+
+            converter = asciitoimage.AsciiToImage(
+                **ascii_converter_args)
+
+            thumb = converter._create_image(
+                orig_file.read())
+
+            with file(tmp_thumb, 'w') as thumb_file:
+                thumb.thumbnail(
+                    thumb_size,
+                    Image.ANTIALIAS)
+                thumb.save(thumb_file)
+
+            _log.debug('Copying local file to public storage')
+            store_public(self.entry, 'thumb', tmp_thumb,
+                         self.name_builder.fill('{basename}.thumbnail.jpg'))
+
+
+class InitialProcessor(CommonAsciiProcessor):
+    """
+    Initial processing step for new ascii media
+    """
+    name = "initial"
+    description = "Initial processing"
+
+    @classmethod
+    def media_is_eligible(cls, entry=None, state=None):
+        if not state:
+            state = entry.state
+        return state in (
+            "unprocessed", "failed")
+
+    @classmethod
+    def generate_parser(cls):
+        parser = argparse.ArgumentParser(
+            description=cls.description,
+            prog=cls.name)
+
+        parser.add_argument(
+            '--thumb_size',
+            nargs=2,
+            metavar=('max_width', 'max_width'),
+            type=int)
+
+        parser.add_argument(
+            '--font',
+            help='the thumbnail font')
+
+        return parser
+
+    @classmethod
+    def args_to_request(cls, args):
+        return request_from_args(
+            args, ['thumb_size', 'font'])
+
+    def process(self, thumb_size=None, font=None):
+        self.common_setup()
+        self.store_unicode_file()
+        self.generate_thumb(thumb_size=thumb_size, font=font)
+        self.copy_original()
+        self.delete_queue_file()
+
+
+class Resizer(CommonAsciiProcessor):
+    """
+    Resizing process steps for processed media
+    """
+    name = 'resize'
+    description = 'Resize thumbnail'
+    thumb_size = 'thumb_size'
+
+    @classmethod
+    def media_is_eligible(cls, entry=None, state=None):
+        """
+        Determine if this media type is eligible for processing
+        """
+        if not state:
+            state = entry.state
+        return state in 'processed'
+
+    @classmethod
+    def generate_parser(cls):
+        parser = argparse.ArgumentParser(
+            description=cls.description,
+            prog=cls.name)
+
+        parser.add_argument(
+            '--thumb_size',
+            nargs=2,
+            metavar=('max_width', 'max_height'),
+            type=int)
+
+        # Needed for gmg reprocess thumbs to work
+        parser.add_argument(
+            'file',
+            nargs='?',
+            default='thumb',
+            choices=['thumb'])
+
+        return parser
+
+    @classmethod
+    def args_to_request(cls, args):
+        return request_from_args(
+            args, ['thumb_size', 'file'])
+
+    def process(self, thumb_size=None, file=None):
+        self.common_setup()
+        self.generate_thumb(thumb_size=thumb_size)
+
+
+class AsciiProcessingManager(ProcessingManager):
+    def __init__(self):
+        super(self.__class__, self).__init__()
+        self.add_processor(InitialProcessor)
+        self.add_processor(Resizer)
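The `InitialProcessor`/`Resizer` classes above expose each processing step through argparse and then turn the parsed namespace into a plain request dict. A sketch of that round trip, using a stand-in for `request_from_args` (assumed behaviour, not the real `mediagoblin.processing` helper):

```python
import argparse

# Stand-in for mediagoblin.processing.request_from_args (assumed behaviour):
# copy the named attributes off the parsed namespace into a dict.
def request_from_args(args, argnames):
    return dict((name, getattr(args, name)) for name in argnames)

# Parser shaped like InitialProcessor.generate_parser above
parser = argparse.ArgumentParser(description='Initial processing',
                                 prog='initial')
parser.add_argument('--thumb_size', nargs=2,
                    metavar=('max_width', 'max_height'), type=int)
parser.add_argument('--font', help='the thumbnail font')

args = parser.parse_args(['--thumb_size', '100', '100', '--font', 'Courier'])
request = request_from_args(args, ['thumb_size', 'font'])
# request now holds plain values ready to pass as **kwargs to process()
```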

View File

@@ -15,7 +15,7 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

 from mediagoblin.media_types import MediaManagerBase
-from mediagoblin.media_types.audio.processing import process_audio, \
+from mediagoblin.media_types.audio.processing import AudioProcessingManager, \
     sniff_handler
 from mediagoblin.tools import pluginapi
@@ -32,8 +32,8 @@ def setup_plugin():

 class AudioMediaManager(MediaManagerBase):
     human_readable = "Audio"
-    processor = staticmethod(process_audio)
     display_template = "mediagoblin/media_displays/audio.html"
+    default_thumb = "images/media_thumbs/image.png"


 def get_media_type_and_manager(ext):
@@ -45,4 +45,5 @@ hooks = {
     'get_media_type_and_manager': get_media_type_and_manager,
     'sniff_handler': sniff_handler,
     ('media_manager', MEDIA_TYPE): lambda: AudioMediaManager,
+    ('reprocess_manager', MEDIA_TYPE): lambda: AudioProcessingManager,
 }

View File

@@ -0,0 +1,8 @@
+[plugin_spec]
+keep_original = boolean(default=True)
+
+# vorbisenc quality
+quality = float(default=0.3)
+
+create_spectrogram = boolean(default=True)
+spectrogram_fft_size = integer(default=4096)

View File

@@ -14,16 +14,19 @@
 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

+import argparse
 import logging
-from tempfile import NamedTemporaryFile
 import os

 from mediagoblin import mg_globals as mgg
-from mediagoblin.processing import (create_pub_filepath, BadMediaFail,
-                                    FilenameBuilder, ProgressCallback)
-from mediagoblin.media_types.audio.transcoders import (AudioTranscoder,
-                                                       AudioThumbnailer)
+from mediagoblin.processing import (
+    BadMediaFail, FilenameBuilder,
+    ProgressCallback, MediaProcessor, ProcessingManager,
+    request_from_args, get_process_filename,
+    store_public, copy_original)
+from mediagoblin.media_types.audio.transcoders import (
+    AudioTranscoder, AudioThumbnailer)

 _log = logging.getLogger(__name__)
@@ -39,121 +42,304 @@ def sniff_handler(media_file, **kw):
         _log.debug('Audio discovery raised BadMediaFail')
         return None

-    if data.is_audio == True and data.is_video == False:
+    if data.is_audio is True and data.is_video is False:
         return MEDIA_TYPE

     return None


-def process_audio(proc_state):
-    """Code to process uploaded audio. Will be run by celery.
-
-    A Workbench() represents a local tempory dir. It is automatically
-    cleaned up when this function exits.
-    """
-    entry = proc_state.entry
-    workbench = proc_state.workbench
-    audio_config = mgg.global_config['media_type:mediagoblin.media_types.audio']
-
-    queued_filepath = entry.queued_media_file
-    queued_filename = workbench.localized_file(
-        mgg.queue_store, queued_filepath,
-        'source')
-    name_builder = FilenameBuilder(queued_filename)
-
-    webm_audio_filepath = create_pub_filepath(
-        entry,
-        '{original}.webm'.format(
-            original=os.path.splitext(
-                queued_filepath[-1])[0]))
-
-    if audio_config['keep_original']:
-        with open(queued_filename, 'rb') as queued_file:
-            original_filepath = create_pub_filepath(
-                entry, name_builder.fill('{basename}{ext}'))
-
-            with mgg.public_store.get_file(original_filepath, 'wb') as \
-                    original_file:
-                _log.debug('Saving original...')
-                original_file.write(queued_file.read())
-
-            entry.media_files['original'] = original_filepath
-
-    transcoder = AudioTranscoder()
-
-    with NamedTemporaryFile(dir=workbench.dir) as webm_audio_tmp:
-        progress_callback = ProgressCallback(entry)
-
-        transcoder.transcode(
-            queued_filename,
-            webm_audio_tmp.name,
-            quality=audio_config['quality'],
-            progress_callback=progress_callback)
-
-        transcoder.discover(webm_audio_tmp.name)
-
-        _log.debug('Saving medium...')
-        mgg.public_store.get_file(webm_audio_filepath, 'wb').write(
-            webm_audio_tmp.read())
-
-        entry.media_files['webm_audio'] = webm_audio_filepath
-
-        # entry.media_data_init(length=int(data.audiolength))
-
-    if audio_config['create_spectrogram']:
-        spectrogram_filepath = create_pub_filepath(
-            entry,
-            '{original}-spectrogram.jpg'.format(
-                original=os.path.splitext(
-                    queued_filepath[-1])[0]))
-
-        with NamedTemporaryFile(dir=workbench.dir, suffix='.ogg') as wav_tmp:
-            _log.info('Creating OGG source for spectrogram')
-            transcoder.transcode(
-                queued_filename,
-                wav_tmp.name,
-                mux_string='vorbisenc quality={0} ! oggmux'.format(
-                    audio_config['quality']))
-
-            thumbnailer = AudioThumbnailer()
-
-            with NamedTemporaryFile(dir=workbench.dir, suffix='.jpg') as spectrogram_tmp:
-                thumbnailer.spectrogram(
-                    wav_tmp.name,
-                    spectrogram_tmp.name,
-                    width=mgg.global_config['media:medium']['max_width'],
-                    fft_size=audio_config['spectrogram_fft_size'])
-
-                _log.debug('Saving spectrogram...')
-                mgg.public_store.get_file(spectrogram_filepath, 'wb').write(
-                    spectrogram_tmp.read())
-
-                entry.media_files['spectrogram'] = spectrogram_filepath
-
-                with NamedTemporaryFile(dir=workbench.dir, suffix='.jpg') as thumb_tmp:
-                    thumbnailer.thumbnail_spectrogram(
-                        spectrogram_tmp.name,
-                        thumb_tmp.name,
-                        (mgg.global_config['media:thumb']['max_width'],
-                         mgg.global_config['media:thumb']['max_height']))
-
-                    thumb_filepath = create_pub_filepath(
-                        entry,
-                        '{original}-thumbnail.jpg'.format(
-                            original=os.path.splitext(
-                                queued_filepath[-1])[0]))
-
-                    mgg.public_store.get_file(thumb_filepath, 'wb').write(
-                        thumb_tmp.read())
-
-                    entry.media_files['thumb'] = thumb_filepath
-    else:
-        entry.media_files['thumb'] = ['fake', 'thumb', 'path.jpg']
-
-    # Remove queued media file from storage and database.
-    # queued_filepath is in the task_id directory which should
-    # be removed too, but fail if the directory is not empty to be on
-    # the super-safe side.
-    mgg.queue_store.delete_file(queued_filepath)      # rm file
-    mgg.queue_store.delete_dir(queued_filepath[:-1])  # rm dir
-    entry.queued_media_file = []
+class CommonAudioProcessor(MediaProcessor):
+    """
+    Provides a base for various audio processing steps
+    """
+    acceptable_files = ['original', 'best_quality', 'webm_audio']
+
+    def common_setup(self):
+        """
+        Setup the workbench directory and pull down the original file, add
+        the audio_config, transcoder, thumbnailer and spectrogram_tmp path
+        """
+        self.audio_config = mgg \
+            .global_config['plugins']['mediagoblin.media_types.audio']
+
+        # Pull down and set up the processing file
+        self.process_filename = get_process_filename(
+            self.entry, self.workbench, self.acceptable_files)
+        self.name_builder = FilenameBuilder(self.process_filename)
+
+        self.transcoder = AudioTranscoder()
+        self.thumbnailer = AudioThumbnailer()
+
+    def copy_original(self):
+        if self.audio_config['keep_original']:
+            copy_original(
+                self.entry, self.process_filename,
+                self.name_builder.fill('{basename}{ext}'))
+
+    def _keep_best(self):
+        """
+        If there is no original, keep the best file that we have
+        """
+        if not self.entry.media_files.get('best_quality'):
+            # Save the best quality file if no original?
+            if not self.entry.media_files.get('original') and \
+                    self.entry.media_files.get('webm_audio'):
+                self.entry.media_files['best_quality'] = self.entry \
+                    .media_files['webm_audio']
+
+    def transcode(self, quality=None):
+        if not quality:
+            quality = self.audio_config['quality']
+
+        progress_callback = ProgressCallback(self.entry)
+        webm_audio_tmp = os.path.join(self.workbench.dir,
+                                      self.name_builder.fill(
+                                          '{basename}{ext}'))
+
+        self.transcoder.transcode(
+            self.process_filename,
+            webm_audio_tmp,
+            quality=quality,
+            progress_callback=progress_callback)
+
+        self.transcoder.discover(webm_audio_tmp)
+
+        self._keep_best()
+
+        _log.debug('Saving medium...')
+        store_public(self.entry, 'webm_audio', webm_audio_tmp,
+                     self.name_builder.fill('{basename}.medium.webm'))
+
+    def create_spectrogram(self, max_width=None, fft_size=None):
+        if not max_width:
+            max_width = mgg.global_config['media:medium']['max_width']
+        if not fft_size:
+            fft_size = self.audio_config['spectrogram_fft_size']
+
+        wav_tmp = os.path.join(self.workbench.dir, self.name_builder.fill(
+            '{basename}.ogg'))
+
+        _log.info('Creating OGG source for spectrogram')
+        self.transcoder.transcode(
+            self.process_filename,
+            wav_tmp,
+            mux_string='vorbisenc quality={0} ! oggmux'.format(
+                self.audio_config['quality']))
+
+        spectrogram_tmp = os.path.join(self.workbench.dir,
+                                       self.name_builder.fill(
+                                           '{basename}-spectrogram.jpg'))
+
+        self.thumbnailer.spectrogram(
+            wav_tmp,
+            spectrogram_tmp,
+            width=max_width,
+            fft_size=fft_size)
+
+        _log.debug('Saving spectrogram...')
+        store_public(self.entry, 'spectrogram', spectrogram_tmp,
+                     self.name_builder.fill('{basename}.spectrogram.jpg'))
+
+    def generate_thumb(self, size=None):
+        if not size:
+            max_width = mgg.global_config['media:thumb']['max_width']
+            max_height = mgg.global_config['media:thumb']['max_height']
+            size = (max_width, max_height)
+
+        thumb_tmp = os.path.join(self.workbench.dir, self.name_builder.fill(
+            '{basename}-thumbnail.jpg'))
+
+        # We need the spectrogram to create a thumbnail
+        spectrogram = self.entry.media_files.get('spectrogram')
+        if not spectrogram:
+            _log.info('No spectrogram found, we will create one.')
+            self.create_spectrogram()
+            spectrogram = self.entry.media_files['spectrogram']
+
+        spectrogram_filepath = mgg.public_store.get_local_path(spectrogram)
+
+        self.thumbnailer.thumbnail_spectrogram(
+            spectrogram_filepath,
+            thumb_tmp,
+            tuple(size))
+
+        store_public(self.entry, 'thumb', thumb_tmp,
+                     self.name_builder.fill('{basename}.thumbnail.jpg'))
+
+
+class InitialProcessor(CommonAudioProcessor):
+    """
+    Initial processing steps for new audio
+    """
+    name = "initial"
+    description = "Initial processing"
+
+    @classmethod
+    def media_is_eligible(cls, entry=None, state=None):
+        """
+        Determine if this media type is eligible for processing
+        """
+        if not state:
+            state = entry.state
+        return state in (
+            "unprocessed", "failed")
+
+    @classmethod
+    def generate_parser(cls):
+        parser = argparse.ArgumentParser(
+            description=cls.description,
+            prog=cls.name)
+
+        parser.add_argument(
+            '--quality',
+            type=float,
+            help='vorbisenc quality. Range: -0.1..1')
+
+        parser.add_argument(
+            '--fft_size',
+            type=int,
+            help='spectrogram fft size')
+
+        parser.add_argument(
+            '--thumb_size',
+            nargs=2,
+            metavar=('max_width', 'max_height'),
+            type=int,
+            help='minimum size is 100 x 100')
+
+        parser.add_argument(
+            '--medium_width',
+            type=int,
+            help='The width of the spectogram')
+
+        parser.add_argument(
+            '--create_spectrogram',
+            action='store_true',
+            help='Create spectogram and thumbnail, will default to config')
+
+        return parser
+
+    @classmethod
+    def args_to_request(cls, args):
+        return request_from_args(
+            args, ['create_spectrogram', 'quality', 'fft_size',
+                   'thumb_size', 'medium_width'])
+
+    def process(self, quality=None, fft_size=None, thumb_size=None,
+                create_spectrogram=None, medium_width=None):
+        self.common_setup()
+
+        if not create_spectrogram:
+            create_spectrogram = self.audio_config['create_spectrogram']
+
+        self.transcode(quality=quality)
+        self.copy_original()
+
+        if create_spectrogram:
+            self.create_spectrogram(max_width=medium_width, fft_size=fft_size)
+            self.generate_thumb(size=thumb_size)
+        self.delete_queue_file()
+
+
+class Resizer(CommonAudioProcessor):
+    """
+    Thumbnail and spectogram resizing process steps for processed audio
+    """
+    name = 'resize'
+    description = 'Resize thumbnail or spectogram'
+    thumb_size = 'thumb_size'
+
+    @classmethod
+    def media_is_eligible(cls, entry=None, state=None):
+        """
+        Determine if this media entry is eligible for processing
+        """
+        if not state:
+            state = entry.state
+        return state in 'processed'
+
+    @classmethod
+    def generate_parser(cls):
+        parser = argparse.ArgumentParser(
+            description=cls.description,
+            prog=cls.name)
+
+        parser.add_argument(
+            '--fft_size',
+            type=int,
+            help='spectrogram fft size')
+
+        parser.add_argument(
+            '--thumb_size',
+            nargs=2,
+            metavar=('max_width', 'max_height'),
+            type=int,
+            help='minimum size is 100 x 100')
+
+        parser.add_argument(
+            '--medium_width',
+            type=int,
+            help='The width of the spectogram')
+
+        parser.add_argument(
+            'file',
+            choices=['thumb', 'spectrogram'])
+
+        return parser
+
+    @classmethod
+    def args_to_request(cls, args):
+        return request_from_args(
+            args, ['thumb_size', 'file', 'fft_size', 'medium_width'])
+
+    def process(self, file, thumb_size=None, fft_size=None,
+                medium_width=None):
+        self.common_setup()
+
+        if file == 'thumb':
+            self.generate_thumb(size=thumb_size)
+        elif file == 'spectrogram':
+            self.create_spectrogram(max_width=medium_width, fft_size=fft_size)
+
+
+class Transcoder(CommonAudioProcessor):
+    """
+    Transcoding processing steps for processed audio
+    """
+    name = 'transcode'
+    description = 'Re-transcode audio'
+
+    @classmethod
+    def media_is_eligible(cls, entry=None, state=None):
+        if not state:
+            state = entry.state
+        return state in 'processed'
+
+    @classmethod
+    def generate_parser(cls):
+        parser = argparse.ArgumentParser(
+            description=cls.description,
+            prog=cls.name)
+
+        parser.add_argument(
+            '--quality',
+            help='vorbisenc quality. Range: -0.1..1')
+
+        return parser
+
+    @classmethod
+    def args_to_request(cls, args):
+        return request_from_args(
+            args, ['quality'])
+
+    def process(self, quality=None):
+        self.common_setup()
+        self.transcode(quality=quality)
+
+
+class AudioProcessingManager(ProcessingManager):
+    def __init__(self):
+        super(self.__class__, self).__init__()
+        self.add_processor(InitialProcessor)
+        self.add_processor(Resizer)
+        self.add_processor(Transcoder)
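Each processor above gates itself with `media_is_eligible`, checking the entry's state either from an explicit `state` argument or from the entry itself. A sketch of that gate with a stand-in entry object (note that `state in 'processed'` in the diff is a substring test on a string; a one-element tuple is the stricter spelling used here):

```python
# FakeEntry stands in for a MediaGoblin MediaEntry; only .state is needed.
class FakeEntry(object):
    def __init__(self, state):
        self.state = state

def initial_is_eligible(entry=None, state=None):
    # InitialProcessor runs on media that has not been processed yet
    if not state:
        state = entry.state
    return state in ("unprocessed", "failed")

def resizer_is_eligible(entry=None, state=None):
    # Resizer/Transcoder only run on already-processed media
    if not state:
        state = entry.state
    return state in ('processed',)

assert initial_is_eligible(FakeEntry('failed'))
assert not initial_is_eligible(FakeEntry('processed'))
assert resizer_is_eligible(state='processed')
```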

View File

@@ -122,8 +122,7 @@ class AudioThumbnailer(object):
                 int(start_x), 0,
                 int(stop_x), int(im_h)))

-        if th.size[0] > th_w or th.size[1] > th_h:
-            th.thumbnail(thumb_size, Image.ANTIALIAS)
+        th.thumbnail(thumb_size, Image.ANTIALIAS)

         th.save(dst)
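The explicit size check removed here was redundant because PIL's `Image.thumbnail` only ever shrinks an image, never enlarges it. A PIL-free sketch of that contract:

```python
# Stand-in for the size arithmetic behind PIL's Image.thumbnail:
# scale down to fit thumb_size, but never upscale.
def thumbnail_size(size, thumb_size):
    ratio = min(float(thumb_size[0]) / size[0],
                float(thumb_size[1]) / size[1],
                1.0)  # the 1.0 cap is what makes the pre-check redundant
    return (int(size[0] * ratio), int(size[1] * ratio))

assert thumbnail_size((1000, 500), (100, 100)) == (100, 50)
assert thumbnail_size((50, 40), (100, 100)) == (50, 40)  # never upscales
```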

View File

@@ -14,24 +14,22 @@
 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

 import datetime
-import logging

 from mediagoblin.media_types import MediaManagerBase
-from mediagoblin.media_types.image.processing import process_image, \
-    sniff_handler
-from mediagoblin.tools import pluginapi
+from mediagoblin.media_types.image.processing import sniff_handler, \
+    ImageProcessingManager

-_log = logging.getLogger(__name__)

 ACCEPTED_EXTENSIONS = ["jpg", "jpeg", "png", "gif", "tiff"]
 MEDIA_TYPE = 'mediagoblin.media_types.image'


-def setup_plugin():
-    config = pluginapi.get_config('mediagoblin.media_types.image')
-
-
 class ImageMediaManager(MediaManagerBase):
     human_readable = "Image"
-    processor = staticmethod(process_image)
     display_template = "mediagoblin/media_displays/image.html"
     default_thumb = "images/media_thumbs/image.png"

@@ -65,8 +63,8 @@ def get_media_type_and_manager(ext):

 hooks = {
-    'setup': setup_plugin,
     'get_media_type_and_manager': get_media_type_and_manager,
     'sniff_handler': sniff_handler,
     ('media_manager', MEDIA_TYPE): lambda: ImageMediaManager,
+    ('reprocess_manager', MEDIA_TYPE): lambda: ImageProcessingManager,
 }

View File

@@ -0,0 +1,7 @@
+[plugin_spec]
+
+# One of BICUBIC, BILINEAR, NEAREST, ANTIALIAS
+resize_filter = string(default="ANTIALIAS")
+
+#level of compression used when resizing images
+quality = integer(default=90)
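The `resize_filter` value from this spec is mapped to a PIL constant through a `PIL_FILTERS` dict and `.upper()`, with unknown names rejected. A sketch of that lookup, with placeholder strings standing in for the actual PIL `Image` constants:

```python
# Placeholder values stand in for PIL's Image.BICUBIC etc.; only the
# lookup-and-reject logic from resize_image is sketched here.
PIL_FILTERS = {
    'BICUBIC': 'Image.BICUBIC',
    'BILINEAR': 'Image.BILINEAR',
    'NEAREST': 'Image.NEAREST',
    'ANTIALIAS': 'Image.ANTIALIAS',
}

def lookup_filter(name):
    # Case-insensitive lookup; raise with the valid choices on a miss
    try:
        return PIL_FILTERS[name.upper()]
    except KeyError:
        raise Exception('Filter "{0}" not found, choose one of {1}'.format(
            name, ', '.join(PIL_FILTERS.keys())))

assert lookup_filter('antialias') == 'Image.ANTIALIAS'
```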

View File

@ -20,9 +20,14 @@ except ImportError:
import Image import Image
import os import os
import logging import logging
import argparse
from mediagoblin import mg_globals as mgg from mediagoblin import mg_globals as mgg
from mediagoblin.processing import BadMediaFail, FilenameBuilder from mediagoblin.processing import (
BadMediaFail, FilenameBuilder,
MediaProcessor, ProcessingManager,
request_from_args, get_process_filename,
store_public, copy_original)
from mediagoblin.tools.exif import exif_fix_image_orientation, \ from mediagoblin.tools.exif import exif_fix_image_orientation, \
extract_exif, clean_exif, get_gps_data, get_useful, \ extract_exif, clean_exif, get_gps_data, get_useful, \
exif_image_needs_rotation exif_image_needs_rotation
@ -38,8 +43,8 @@ PIL_FILTERS = {
MEDIA_TYPE = 'mediagoblin.media_types.image' MEDIA_TYPE = 'mediagoblin.media_types.image'
def resize_image(proc_state, resized, keyname, target_name, new_size, def resize_image(entry, resized, keyname, target_name, new_size,
exif_tags, workdir): exif_tags, workdir, quality, filter):
""" """
Store a resized version of an image and return its pathname. Store a resized version of an image and return its pathname.
@ -51,17 +56,16 @@ def resize_image(proc_state, resized, keyname, target_name, new_size,
exif_tags -- EXIF data for the original image exif_tags -- EXIF data for the original image
workdir -- directory path for storing converted image files workdir -- directory path for storing converted image files
new_size -- 2-tuple size for the resized image new_size -- 2-tuple size for the resized image
quality -- level of compression used when resizing images
filter -- One of BICUBIC, BILINEAR, NEAREST, ANTIALIAS
""" """
config = mgg.global_config['media_type:mediagoblin.media_types.image']
resized = exif_fix_image_orientation(resized, exif_tags) # Fix orientation resized = exif_fix_image_orientation(resized, exif_tags) # Fix orientation
filter_config = config['resize_filter']
try: try:
resize_filter = PIL_FILTERS[filter_config.upper()] resize_filter = PIL_FILTERS[filter.upper()]
except KeyError: except KeyError:
raise Exception('Filter "{0}" not found, choose one of {1}'.format( raise Exception('Filter "{0}" not found, choose one of {1}'.format(
unicode(filter_config), unicode(filter),
u', '.join(PIL_FILTERS.keys()))) u', '.join(PIL_FILTERS.keys())))
resized.thumbnail(new_size, resize_filter) resized.thumbnail(new_size, resize_filter)
@ -69,32 +73,36 @@ def resize_image(proc_state, resized, keyname, target_name, new_size,
# Copy the new file to the conversion subdir, then remotely. # Copy the new file to the conversion subdir, then remotely.
tmp_resized_filename = os.path.join(workdir, target_name) tmp_resized_filename = os.path.join(workdir, target_name)
with file(tmp_resized_filename, 'w') as resized_file: with file(tmp_resized_filename, 'w') as resized_file:
resized.save(resized_file, quality=config['quality']) resized.save(resized_file, quality=quality)
proc_state.store_public(keyname, tmp_resized_filename, target_name) store_public(entry, keyname, tmp_resized_filename, target_name)
def resize_tool(proc_state, force, keyname, target_name, def resize_tool(entry,
conversions_subdir, exif_tags): force, keyname, orig_file, target_name,
# filename -- the filename of the original image being resized conversions_subdir, exif_tags, quality, filter, new_size=None):
filename = proc_state.get_queued_filename() # Use the default size if new_size was not given
max_width = mgg.global_config['media:' + keyname]['max_width'] if not new_size:
max_height = mgg.global_config['media:' + keyname]['max_height'] max_width = mgg.global_config['media:' + keyname]['max_width']
max_height = mgg.global_config['media:' + keyname]['max_height']
new_size = (max_width, max_height)
# If the size of the original file exceeds the specified size for the desized # If the size of the original file exceeds the specified size for the desized
# file, a target_name file is created and later associated with the media # file, a target_name file is created and later associated with the media
# entry. # entry.
# Also created if the file needs rotation, or if forced. # Also created if the file needs rotation, or if forced.
try: try:
im = Image.open(filename) im = Image.open(orig_file)
except IOError: except IOError:
raise BadMediaFail() raise BadMediaFail()
if force \ if force \
or im.size[0] > max_width \ or im.size[0] > new_size[0]\
or im.size[1] > max_height \ or im.size[1] > new_size[1]\
or exif_image_needs_rotation(exif_tags): or exif_image_needs_rotation(exif_tags):
resize_image( resize_image(
proc_state, im, unicode(keyname), target_name, entry, im, unicode(keyname), target_name,
(max_width, max_height), tuple(new_size),
exif_tags, conversions_subdir) exif_tags, conversions_subdir,
quality, filter)
SUPPORTED_FILETYPES = ['png', 'gif', 'jpg', 'jpeg', 'tiff'] SUPPORTED_FILETYPES = ['png', 'gif', 'jpg', 'jpeg', 'tiff']
@ -119,53 +127,210 @@ def sniff_handler(media_file, **kw):
return None return None
class CommonImageProcessor(MediaProcessor):
    """
    Provides a base for various media processing steps
    """
    # list of acceptable file keys in order of preference for reprocessing
    acceptable_files = ['original', 'medium']

    def common_setup(self):
        """
        Set up the workbench directory and pull down the original file
        """
        self.image_config = mgg.global_config['plugins'][
            'mediagoblin.media_types.image']

        ## @@: Should this be two functions?
        # Conversions subdirectory to avoid collisions
        self.conversions_subdir = os.path.join(
            self.workbench.dir, 'conversions')
        os.mkdir(self.conversions_subdir)

        # Pull down and set up the processing file
        self.process_filename = get_process_filename(
            self.entry, self.workbench, self.acceptable_files)
        self.name_builder = FilenameBuilder(self.process_filename)

        # Exif extraction
        self.exif_tags = extract_exif(self.process_filename)

    def generate_medium_if_applicable(self, size=None, quality=None,
                                      filter=None):
        if not quality:
            quality = self.image_config['quality']
        if not filter:
            filter = self.image_config['resize_filter']

        resize_tool(self.entry, False, 'medium', self.process_filename,
                    self.name_builder.fill('{basename}.medium{ext}'),
                    self.conversions_subdir, self.exif_tags, quality,
                    filter, size)

    def generate_thumb(self, size=None, quality=None, filter=None):
        if not quality:
            quality = self.image_config['quality']
        if not filter:
            filter = self.image_config['resize_filter']

        resize_tool(self.entry, True, 'thumb', self.process_filename,
                    self.name_builder.fill('{basename}.thumbnail{ext}'),
                    self.conversions_subdir, self.exif_tags, quality,
                    filter, size)

    def copy_original(self):
        copy_original(
            self.entry, self.process_filename,
            self.name_builder.fill('{basename}{ext}'))

    def extract_metadata(self):
        # Is there any GPS data
        gps_data = get_gps_data(self.exif_tags)

        # Insert exif data into database
        exif_all = clean_exif(self.exif_tags)

        if len(exif_all):
            self.entry.media_data_init(exif_all=exif_all)

        if len(gps_data):
            for key in list(gps_data.keys()):
                gps_data['gps_' + key] = gps_data.pop(key)
            self.entry.media_data_init(**gps_data)
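The gps-key renaming at the end of `extract_metadata()` is easy to miss, so here it is in isolation; the coordinate values below are made-up sample data, not anything from this commit:

```python
# Mirrors the loop in extract_metadata() above: every key coming out of
# get_gps_data() is re-stored under a 'gps_' prefix before being passed
# to media_data_init(). The sample coordinates are illustrative only.
gps_data = {'latitude': 59.33, 'longitude': 18.06}
for key in list(gps_data.keys()):
    gps_data['gps_' + key] = gps_data.pop(key)
print(sorted(gps_data))
```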
class InitialProcessor(CommonImageProcessor):
"""
Initial processing step for new images
"""
name = "initial"
description = "Initial processing"
@classmethod
def media_is_eligible(cls, entry=None, state=None):
"""
Determine if this media type is eligible for processing
"""
if not state:
state = entry.state
return state in (
"unprocessed", "failed")
###############################
# Command line interface things
###############################
@classmethod
def generate_parser(cls):
parser = argparse.ArgumentParser(
description=cls.description,
prog=cls.name)
parser.add_argument(
'--size',
nargs=2,
metavar=('max_width', 'max_height'),
type=int)
parser.add_argument(
'--thumb-size',
nargs=2,
metavar=('max_width', 'max_height'),
type=int)
parser.add_argument(
'--filter',
choices=['BICUBIC', 'BILINEAR', 'NEAREST', 'ANTIALIAS'])
parser.add_argument(
'--quality',
type=int,
help='level of compression used when resizing images')
return parser
@classmethod
def args_to_request(cls, args):
return request_from_args(
args, ['size', 'thumb_size', 'filter', 'quality'])
def process(self, size=None, thumb_size=None, quality=None, filter=None):
self.common_setup()
self.generate_medium_if_applicable(size=size, filter=filter,
quality=quality)
self.generate_thumb(size=thumb_size, filter=filter, quality=quality)
self.copy_original()
self.extract_metadata()
self.delete_queue_file()
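As a rough sketch of the command-line surface that `InitialProcessor.generate_parser()` defines, an equivalent standalone parser can be exercised like this (the parser is rebuilt here independently for illustration, so treat it as an approximation of the real one):

```python
import argparse

# Standalone mirror of the parser defined above; argument names and
# choices are copied from the diff, everything else is illustrative.
parser = argparse.ArgumentParser(description="Initial processing",
                                 prog="initial")
parser.add_argument('--size', nargs=2,
                    metavar=('max_width', 'max_height'), type=int)
parser.add_argument('--thumb-size', nargs=2,
                    metavar=('max_width', 'max_height'), type=int)
parser.add_argument('--filter',
                    choices=['BICUBIC', 'BILINEAR', 'NEAREST', 'ANTIALIAS'])
parser.add_argument('--quality', type=int,
                    help='level of compression used when resizing images')

args = parser.parse_args(
    ['--size', '640', '480', '--thumb-size', '180', '180',
     '--filter', 'ANTIALIAS', '--quality', '90'])
print(args.size, args.thumb_size, args.filter, args.quality)
```

Note that `--thumb-size` surfaces as the attribute `args.thumb_size`, which is why `args_to_request` lists `'thumb_size'` with an underscore.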
class Resizer(CommonImageProcessor):
"""
Resizing process steps for processed media
"""
name = 'resize'
description = 'Resize image'
thumb_size = 'size'
@classmethod
def media_is_eligible(cls, entry=None, state=None):
"""
Determine if this media type is eligible for processing
"""
if not state:
state = entry.state
return state in 'processed'
###############################
# Command line interface things
###############################
@classmethod
def generate_parser(cls):
parser = argparse.ArgumentParser(
description=cls.description,
prog=cls.name)
parser.add_argument(
'--size',
nargs=2,
metavar=('max_width', 'max_height'),
type=int)
parser.add_argument(
'--filter',
choices=['BICUBIC', 'BILINEAR', 'NEAREST', 'ANTIALIAS'])
parser.add_argument(
'--quality',
type=int,
help='level of compression used when resizing images')
parser.add_argument(
'file',
choices=['medium', 'thumb'])
return parser
@classmethod
def args_to_request(cls, args):
return request_from_args(
args, ['size', 'file', 'quality', 'filter'])
def process(self, file, size=None, filter=None, quality=None):
self.common_setup()
if file == 'medium':
self.generate_medium_if_applicable(size=size, filter=filter,
quality=quality)
elif file == 'thumb':
self.generate_thumb(size=size, filter=filter, quality=quality)
class ImageProcessingManager(ProcessingManager):
def __init__(self):
super(self.__class__, self).__init__()
self.add_processor(InitialProcessor)
self.add_processor(Resizer)
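A minimal sketch of the registration pattern `ImageProcessingManager` relies on; the real `ProcessingManager` lives in `mediagoblin.processing` and does more (eligibility checks, lookup for reprocessing), so this toy only shows processors being keyed by their class-level `name`:

```python
# Toy illustration only: these classes stand in for InitialProcessor /
# Resizer, and ToyManager mirrors just the add_processor() registration.
class ToyProcessor(object):
    name = 'initial'

class ToyResizer(object):
    name = 'resize'

class ToyManager(object):
    def __init__(self):
        self.processors = {}

    def add_processor(self, processor):
        # Each processor registers under its class-level ``name``
        self.processors[processor.name] = processor

manager = ToyManager()
manager.add_processor(ToyProcessor)
manager.add_processor(ToyResizer)
print(sorted(manager.processors))
```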
if __name__ == '__main__':
@@ -15,21 +15,16 @@

# along with this program.  If not, see <http://www.gnu.org/licenses/>.

from mediagoblin.media_types import MediaManagerBase
from mediagoblin.media_types.pdf.processing import PdfProcessingManager, \
    sniff_handler

ACCEPTED_EXTENSIONS = ['pdf']
MEDIA_TYPE = 'mediagoblin.media_types.pdf'


class PDFMediaManager(MediaManagerBase):
    human_readable = "PDF"
    display_template = "mediagoblin/media_displays/pdf.html"
    default_thumb = "images/media_thumbs/pdf.jpg"

@@ -40,8 +35,8 @@ def get_media_type_and_manager(ext):

hooks = {
    'get_media_type_and_manager': get_media_type_and_manager,
    'sniff_handler': sniff_handler,
    ('media_manager', MEDIA_TYPE): lambda: PDFMediaManager,
    ('reprocess_manager', MEDIA_TYPE): lambda: PdfProcessingManager,
}
@@ -0,0 +1,5 @@
[plugin_spec]
pdf_js = boolean(default=True)
@@ -13,14 +13,18 @@

#
# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import os
import logging

import dateutil.parser
from subprocess import PIPE, Popen

from mediagoblin import mg_globals as mgg
from mediagoblin.processing import (
    FilenameBuilder, BadMediaFail,
    MediaProcessor, ProcessingManager,
    request_from_args, get_process_filename,
    store_public, copy_original)
from mediagoblin.tools.translate import fake_ugettext_passthrough as _

_log = logging.getLogger(__name__)
@@ -230,51 +234,207 @@ def pdf_info(original):
    return ret_dict
class CommonPdfProcessor(MediaProcessor):
    """
    Provides a base for various pdf processing steps
    """
    acceptable_files = ['original', 'pdf']

    def common_setup(self):
        """
        Set up common pdf processing steps
        """
        # Pull down and set up the processing file
        self.process_filename = get_process_filename(
            self.entry, self.workbench, self.acceptable_files)
        self.name_builder = FilenameBuilder(self.process_filename)

        self._set_pdf_filename()

    def _set_pdf_filename(self):
        if self.name_builder.ext == '.pdf':
            self.pdf_filename = self.process_filename
        elif self.entry.media_files.get('pdf'):
            self.pdf_filename = self.workbench.localized_file(
                mgg.public_store, self.entry.media_files['pdf'])
        else:
            self.pdf_filename = self._generate_pdf()

    def copy_original(self):
        copy_original(
            self.entry, self.process_filename,
            self.name_builder.fill('{basename}{ext}'))
def generate_thumb(self, thumb_size=None):
if not thumb_size:
thumb_size = (mgg.global_config['media:thumb']['max_width'],
mgg.global_config['media:thumb']['max_height'])
# Note: pdftocairo adds '.png', so don't include an ext
thumb_filename = os.path.join(self.workbench.dir,
self.name_builder.fill(
'{basename}.thumbnail'))
executable = where('pdftocairo')
args = [executable, '-scale-to', str(min(thumb_size)),
'-singlefile', '-png', self.pdf_filename, thumb_filename]
_log.debug('calling {0}'.format(repr(' '.join(args))))
Popen(executable=executable, args=args).wait()
# since pdftocairo added '.png', we need to include it with the
# filename
store_public(self.entry, 'thumb', thumb_filename + '.png',
self.name_builder.fill('{basename}.thumbnail.png'))
    def _generate_pdf(self):
        """
        Store the pdf. If the file is not a pdf, make it a pdf
        """
        tmp_pdf = self.process_filename

        unoconv = where('unoconv')
        Popen(executable=unoconv,
              args=[unoconv, '-v', '-f', 'pdf', self.process_filename]).wait()

        if not os.path.exists(tmp_pdf):
            _log.debug('unoconv failed to convert file to pdf')
            raise BadMediaFail()

        store_public(self.entry, 'pdf', tmp_pdf,
                     self.name_builder.fill('{basename}.pdf'))

        return self.workbench.localized_file(
            mgg.public_store, self.entry.media_files['pdf'])

    def extract_pdf_info(self):
        pdf_info_dict = pdf_info(self.pdf_filename)
        self.entry.media_data_init(**pdf_info_dict)

    def generate_medium(self, size=None):
        if not size:
            size = (mgg.global_config['media:medium']['max_width'],
                    mgg.global_config['media:medium']['max_height'])
# Note: pdftocairo adds '.png', so don't include an ext
filename = os.path.join(self.workbench.dir,
self.name_builder.fill('{basename}.medium'))
executable = where('pdftocairo')
args = [executable, '-scale-to', str(min(size)),
'-singlefile', '-png', self.pdf_filename, filename]
_log.debug('calling {0}'.format(repr(' '.join(args))))
Popen(executable=executable, args=args).wait()
# since pdftocairo added '.png', we need to include it with the
# filename
store_public(self.entry, 'medium', filename + '.png',
self.name_builder.fill('{basename}.medium.png'))
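`generate_thumb()` and `generate_medium()` build the same `pdftocairo` command line; a small helper makes the `-scale-to min(size)` choice explicit (the helper name is invented for this sketch, and no process is spawned here):

```python
# Sketch of the pdftocairo argument list assembled above. In the real
# code where() resolves the executable and Popen runs it; here we only
# build and inspect the list.
def build_pdftocairo_args(executable, size, pdf_filename, out_base):
    # -scale-to uses the smaller bound so the page fits both dimensions;
    # pdftocairo itself appends '.png' to out_base.
    return [executable, '-scale-to', str(min(size)),
            '-singlefile', '-png', pdf_filename, out_base]

args = build_pdftocairo_args('pdftocairo', (640, 480),
                             'doc.pdf', 'doc.medium')
print(args)
```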
class InitialProcessor(CommonPdfProcessor):
"""
Initial processing step for new pdfs
"""
name = "initial"
description = "Initial processing"
@classmethod
def media_is_eligible(cls, entry=None, state=None):
"""
Determine if this media type is eligible for processing
"""
if not state:
state = entry.state
return state in (
"unprocessed", "failed")
@classmethod
def generate_parser(cls):
parser = argparse.ArgumentParser(
description=cls.description,
prog=cls.name)
parser.add_argument(
'--size',
nargs=2,
metavar=('max_width', 'max_height'),
type=int)
parser.add_argument(
'--thumb-size',
nargs=2,
metavar=('max_width', 'max_height'),
type=int)
return parser
@classmethod
def args_to_request(cls, args):
return request_from_args(
args, ['size', 'thumb_size'])
def process(self, size=None, thumb_size=None):
self.common_setup()
self.extract_pdf_info()
self.copy_original()
self.generate_medium(size=size)
self.generate_thumb(thumb_size=thumb_size)
self.delete_queue_file()
class Resizer(CommonPdfProcessor):
"""
Resizing process steps for processed pdfs
"""
name = 'resize'
description = 'Resize thumbnail and medium'
thumb_size = 'size'
@classmethod
def media_is_eligible(cls, entry=None, state=None):
"""
Determine if this media type is eligible for processing
"""
if not state:
state = entry.state
return state in 'processed'
@classmethod
def generate_parser(cls):
parser = argparse.ArgumentParser(
description=cls.description,
prog=cls.name)
parser.add_argument(
'--size',
nargs=2,
metavar=('max_width', 'max_height'),
type=int)
parser.add_argument(
'file',
choices=['medium', 'thumb'])
return parser
@classmethod
def args_to_request(cls, args):
return request_from_args(
args, ['size', 'file'])
def process(self, file, size=None):
self.common_setup()
if file == 'medium':
self.generate_medium(size=size)
elif file == 'thumb':
self.generate_thumb(thumb_size=size)
class PdfProcessingManager(ProcessingManager):
def __init__(self):
super(self.__class__, self).__init__()
self.add_processor(InitialProcessor)
self.add_processor(Resizer)
@@ -15,21 +15,16 @@

# along with this program.  If not, see <http://www.gnu.org/licenses/>.

from mediagoblin.media_types import MediaManagerBase
from mediagoblin.media_types.stl.processing import StlProcessingManager, \
    sniff_handler

MEDIA_TYPE = 'mediagoblin.media_types.stl'
ACCEPTED_EXTENSIONS = ["obj", "stl"]


class STLMediaManager(MediaManagerBase):
    human_readable = "stereo lithographics"
    display_template = "mediagoblin/media_displays/stl.html"
    default_thumb = "images/media_thumbs/video.jpg"

@@ -39,8 +34,8 @@ def get_media_type_and_manager(ext):
    return MEDIA_TYPE, STLMediaManager

hooks = {
    'get_media_type_and_manager': get_media_type_and_manager,
    'sniff_handler': sniff_handler,
    ('media_manager', MEDIA_TYPE): lambda: STLMediaManager,
    ('reprocess_manager', MEDIA_TYPE): lambda: StlProcessingManager,
}
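The hooks dicts above map either plain hook names or `(family, MEDIA_TYPE)` tuples to zero-argument callables; resolving one is a dict lookup plus a call. A stand-in sketch (the string value replaces the real manager class, and MediaGoblin's pluginapi does the actual lookup):

```python
# Illustrative only: the lambda returns a placeholder string where the
# real hooks dict returns the StlProcessingManager class itself.
MEDIA_TYPE = 'mediagoblin.media_types.stl'
hooks = {
    ('reprocess_manager', MEDIA_TYPE): lambda: 'StlProcessingManager',
}
resolved = hooks[('reprocess_manager', MEDIA_TYPE)]()
print(resolved)
```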
Some files were not shown because too many files have changed in this diff.