Using the brand new itsdangerous sessions to power the
sessions for piwigo.
The real point is: Clients want to have the session in a
"pwg_id" cookie and don't accept any other cookie name.
Initially I was going to write a failing test for refresh tokens. Thus
this fix includes an orphaned 'expect_failure' method in test utils.
I ended up writing support for OAuth refresh tokens, as well as a lot of
cleanup (hopefully) in the OAuth plugin code.
**Rebase**: While waiting for this stuff to be merged, the testing
framework changed; it now comes with batteries included for expected
failures. Removed the legacy nosetest helper.
Also added a lot of backref=backref([...], cascade='all, delete-orphan')
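For illustration, the pattern looks roughly like this (models are
hypothetical); the cascade makes deleting a parent row clean up its
dependents instead of leaving orphans:

    from sqlalchemy import Column, ForeignKey, Integer
    from sqlalchemy.orm import backref, declarative_base, relationship

    Base = declarative_base()

    class MediaEntry(Base):
        __tablename__ = 'core__media_entries'
        id = Column(Integer, primary_key=True)

    class MediaComment(Base):
        __tablename__ = 'core__media_comments'
        id = Column(Integer, primary_key=True)
        media_entry = Column(Integer, ForeignKey('core__media_entries.id'))

        # Deleting a MediaEntry now also deletes its comments.
        get_entry = relationship(
            MediaEntry,
            backref=backref('comments', cascade='all, delete-orphan'))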
* JDShu/649_use_form_data_field:
Use WTForms data field in user_pages/views.py
Use WTForms data field in auth/views.py
auth: whitespace cleanup in views.py
Use WTForms data field in plugins/oauth/views.py
Use WTForms data field in submit/views.py
Use WTForms data field in edit/views.py
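What these commits change, sketched with illustrative names: values are
read from the field's validated, coerced .data attribute instead of raw
request.form strings:

    form = SubmitForm(request.form)
    if request.method == 'POST' and form.validate():
        # Before: title = request.form['title']
        title = form.title.data
        description = form.description.data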
- pwg.session.getStatus returns the current user as
"fake_user". When we have a session, we'll return
something better.
- pwg.categories.getList adds a name and the parent id for
  its one and only "collection".
- Improve logging a bit.
Shotwell needs a pwg_id cookie to continue.
And really, it's the only cookie it supports, so in the
long run, we need to send a proper session cookie as
pwg_id.
Check for CELERY_CONFIG_MODULE before we import raven.contrib.celery. It
seems that the import otherwise sets up the celery client before we get
to pass it our mediagoblin-specific settings.
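Roughly the shape of the guard (the DSN and client wiring are
placeholders):

    import os

    if os.environ.get('CELERY_CONFIG_MODULE'):
        # Delay this import: it hooks up a celery client as a side
        # effect, which must not happen before our settings are set.
        from raven import Client
        from raven.contrib.celery import register_signal

        client = Client(dsn=sentry_dsn)  # sentry_dsn: placeholder
        register_signal(client)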
Removed the translation marking and passed in empty strings to avoid
WTForms automagically creating the labels from the field names (e.g.
client_id => 'Client Id').
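Concretely, the first positional argument to a WTForms field is its
label, so an explicit empty string suppresses the auto-generated one
(field names here are just examples):

    import wtforms

    class ClientRegistrationForm(wtforms.Form):
        # Was: wtforms.StringField(_(u'Client ID'), ...)
        client_id = wtforms.StringField('')
        redirect_uri = wtforms.StringField('')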
The response headers were never getting set because of a bug in commit
7c552c0. This expands the loop into a more readable form and results in
the headers actually getting set.
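One plausible shape of that kind of bug, purely for illustration (the
real one is in 7c552c0): a generator expression is lazy and never runs
unless consumed, while the expanded loop actually executes. Names here
are assumed:

    # Before (never evaluated, headers silently dropped):
    #   (response.headers.add(key, value)
    #    for key, value in header_items)

    # After:
    for key, value in header_items:
        response.headers.add(key, value)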
The template in the geolocation plugin still used the old
config option. Just remove that. To enable it, you enable
the plugin. No need for extra config.
Tested by manwesulimo2004 (via IRC).
- I'm having trouble seeing if the geolocation stuff actually works,
but plugins are included
- including a list of template hooks works; however, the macro to
  include them does not, so it's kinda verbose
- Added start of template hook code to pluginapi.py (see the sketch
  after this list)
- Started to break openstreetmap into plugin; moved templates
- Added plugin hooks in media and image media templates
... almost certainly, none of this works yet. :)
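A rough sketch of where the pluginapi hook code is headed (helper names
are guesses at this stage): plugins register templates under a hook
name, and pages pull the list back out to include each one:

    template_hooks = {}

    def register_template_hooks(hooks):
        # hooks maps a hook name to a template path, e.g.
        # {'image_sideinfo': 'geolocation/map.html'}
        for hook_name, template_path in hooks.items():
            template_hooks.setdefault(hook_name, []).append(template_path)

    def get_hook_templates(hook_name):
        # Templates loop over this and {% include %} each entry,
        # which is the verbose part the missing macro would hide.
        return template_hooks.get(hook_name, [])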
People(tm) want to start run_process_media from the CLI and might not
have a request object handy. So pass the feed_url into run_process_media
rather than the request object, and allow the feed url to be empty
(resulting in no PuSH notification at all).
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
Notifying the PuSH servers had 3 problems.
1) It was done immediately after sending the processing task to celery.
   So if celery ran in a separate process, we would notify the PuSH
   servers before the new media was processed/visible. (#436)
2) Notification code was called in submit/views.py, so submitting via
   the API never resulted in notifications. (#585)
3) If notifying the PuSH server failed, we would never retry.
The solution was to make the PuSH notification an asynchronous subtask.
This way: 1) it will only be called once async processing has finished,
2) it is in the main processing code path, so even API calls will result
in notifications, and 3) we retry 3 times in case of failure before
giving up. If the celery server is in a separate process, we will wait
3x 2 minutes before retrying the notification.
The only downside is that the celery server needs to have access to the internet
to ping the PuSH server. If that is a problem, we need to make the task belong
to a special group of celery servers that has access to the internet.
As a side effect, I believe I removed the limitation that prevented us from
upgrading celery.
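The retry behaviour maps directly onto stock celery task options; a
minimal sketch (task and helper names are illustrative):

    from celery import shared_task

    @shared_task(bind=True, max_retries=3, default_retry_delay=120)
    def handle_push_urls(self, feed_url):
        # Runs as a subtask only after processing has finished.
        try:
            notify_push_servers(feed_url)  # assumed helper
        except Exception as exc:
            # Re-queue; celery waits default_retry_delay (2 min)
            # and gives up after max_retries attempts.
            raise self.retry(exc=exc)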
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
Factor all the migration related stuff out into a new
.db.sql.migration_tools module.
First, we don't have to load this module for our normal
server.
Second, it makes all the import dependencies a little
cleaner.
This concludes the db.sql.* -> db.* move. Our db abstraction layer is
sqlalchemy, so there is no need for a separate db.sql.* hierarchy.
All tests have been run for each commit in the series to make sure
everything works at every step.
Now that sqlalchemy is providing the database abstractions, there is no
need to hide everything in db.sql.* sub-modules. It complicates the code
and adds a further layer of indirection.
Move db.sql.util.py to db.util.py and adapt the importing modules.
First rename prepare_entry to prepare_queue_task, because
that is really more like what this thing does.
Thanks to Velmont for noting that we do not need a request
in here; an "app" is good enough. This means that
this stuff can be called from tool scripts too.
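Approximately what the renamed helper looks like after the change (the
body is a guess from the description; the app/entry/filename shape is
the point):

    import uuid

    def prepare_queue_task(app, entry, filename):
        # An app is all we need here, no web request required.
        task_id = str(uuid.uuid4())
        entry.queued_task_id = task_id
        queue_filepath = app.queue_store.get_unique_filepath(
            ['media_entries', task_id, filename])
        entry.queued_media_file = queue_filepath
        return app.queue_store.get_file(queue_filepath, 'wb')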