When we want to bundle db actions into a single transaction, we
can now use delete(commit=False) to prevent the transaction from being
committed immediately. This is useful when e.g. deleting a User() and
thousands of their MediaEntries in a single commit.
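A minimal usage sketch; the query helper and field names here are
assumptions, not the exact API:

    # Hypothetical: delete a user and all their media in one transaction.
    from mediagoblin.db.models import MediaEntry

    def delete_user_with_media(user):
        for media in MediaEntry.query.filter_by(uploader=user.id):
            media.delete(commit=False)  # queue the delete, don't commit yet
        user.delete()  # the final delete commits the whole batch at once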
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
We have migrations that create new tables. Those currently use
"raw" table definitions, which easily leads to errors (we have
already had this problem).
So instead, rewrite them to use declarative tables and use
those to create the new tables. Just copy the new table over to
the migration, strip it down to the bare minimum, rename it to
_v0, base it on declarative_base(), and be done!
Do this for the current migrations.
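An illustrative sketch of the pattern; the RegisterMigration
decorator is taken to be the migration framework's, and the column
set is made up:

    from sqlalchemy import Column, Integer, Unicode
    from sqlalchemy.ext.declarative import declarative_base

    Base_v0 = declarative_base()

    class ProcessingMetaData_v0(Base_v0):
        """Stripped-down copy of the table, frozen at migration time."""
        __tablename__ = 'core__processing_metadata'

        id = Column(Integer, primary_key=True)
        media_entry_id = Column(Integer, nullable=False)
        callback_url = Column(Unicode)

    @RegisterMigration(6, MIGRATIONS)
    def create_processing_metadata_table(db_conn):
        # The declarative table creates itself; no hand-written SQL.
        ProcessingMetaData_v0.__table__.create(db_conn.bind)
        db_conn.commit()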
So far, templates required a very complex blurb simply to insert a
thumbnail URL, exposing much of the internal logic to the template
designer. In addition, we would fail with an error if, for some
reason, the media_files['thumb'] entry was never populated.
This adds the MediaEntry.thumb_url property that template designers
can simply use. It will do the right thing, either fetching the proper
thumbnail or handing back a generic icon specified in a media type's
MEDIA_MANAGER as "default_thumb".
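A sketch of what that property can look like on the model; the
storage and staticdirector calls are assumptions:

    from mediagoblin import mg_globals

    # Inside the MediaEntry model:
    @property
    def thumb_url(self):
        """Return the thumbnail URL, or the media type's fallback icon."""
        if 'thumb' in self.media_files:
            return mg_globals.app.public_store.file_url(
                self.media_files['thumb'])
        # No thumbnail was generated: hand back the icon that this
        # media type's MEDIA_MANAGER declares as "default_thumb".
        return mg_globals.app.staticdirector(
            self.media_manager['default_thumb'])

Templates then shrink to something like <img src="{{ media.thumb_url }}" />.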
Add a default fallback image icon (stolen from Tango, which is
Public Domain since version 0.8.90, as I understand it), since the
one we referred to did not exist. Perhaps a "broken image" icon
would be better, but I'll leave that to our capable designers.
All templates have been modified to make use of the new thumb_url
property.
Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
- Added HTTPError catching around the callback request so a failing
callback does not mark the entry as failed; the exception is just
logged (see the sketch after this list).
- Fixed a bug where I forgot to actually fetch the entry before
passing it to json_processing_callback.
- Changed __main__ migration #6 to create the ProcessingMetaData table
as it currently is, to prevent possible breakage if a site admin is
lagging behind on their db migrations and more than one migration
wants to fix stuff with the ProcessingMetaData table.
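A rough sketch of that callback guard; the urllib2 plumbing and the
callback_url lookup are illustrative only:

    import json
    import logging
    from urllib2 import HTTPError, Request, urlopen

    _log = logging.getLogger(__name__)

    def json_processing_callback(entry):
        """POST the entry's processing state to its callback URL."""
        payload = json.dumps({'id': entry.id, 'state': entry.state})
        request = Request(entry.processing_metadata[0].callback_url,
                          payload, {'Content-Type': 'application/json'})
        try:
            urlopen(request)
        except HTTPError:
            # A failing callback must not mark the entry as failed;
            # just log the exception and move on.
            _log.exception('Sending callback for entry %s failed', entry.id)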
THE MIGRATIONS SUPPLIED WITH THIS COMMIT WILL DROP AND RE-CREATE YOUR
oauth__tokens AND oauth__codes TABLES. ALL YOUR OAUTH CODES AND TOKENS
WILL BE LOST.
- Fixed pylint issues in db/sql/migrations.
- Added __repr__ to the User model.
- Added _disable_cors option to json_response (see the sketch after
this list).
- Added crude error handling to the api.tools.api_auth decorator.
- Updated the OAuth README.
- Added client registration, client overview, connection overview,
client authorization views and templates.
- Added error handling to the OAuthAuth Auth object.
- Added AuthorizationForm, ClientRegistrationForm in oauth/forms.
- Added migrations for OAuth, added client registration migration.
- Added OAuthClient, OAuthUserClient models.
- Added oauth/tools with require_client_auth decorator method.
- Added progress meter for video and audio media types.
- Changed the __repr__ method of MediaEntry to display a more useful
description.
- Added a new MediaEntry.state, 'processing', which means that the
processor is currently running on the item.
- Fixed some PEP 8 issues in user_pages/views.py.
- Fixed the ATOM TAG URI to show the correct year.
- It is now possible to actually see what's processing, thanks to a
fix for a bug where __getitem__ was called on the db model.
- Removed DEPRECATED message from the docstring, it wasn't true.
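To illustrate the _disable_cors bullet above, json_response might look
roughly like this; the exact header set and webob usage are
assumptions:

    import json
    from webob import Response

    def json_response(serializable, _disable_cors=False):
        """Serialize to JSON and wrap in a webob Response.

        CORS headers are attached by default; pass _disable_cors=True
        for endpoints that should not be callable cross-origin.
        """
        response = Response(json.dumps(serializable))
        response.content_type = 'application/json'
        if not _disable_cors:
            cors_headers = (('Access-Control-Allow-Origin', '*'),
                            ('Access-Control-Allow-Methods', 'POST, GET'))
            for key, value in cors_headers:
                response.headers.add(key, value)
        return response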
sqlite doesn't like complex changes (ALTER TABLE) happening inside a
transaction that has already done other things. And really, each
migration should say "I'm done" and commit its changes.
This is not the full story, but it's the core of it. Specifically,
the migration framework should probably do a rollback "just in case"
after each migration.
The cleanup could be missed if the request-handling code in
app.py:__call__ exits early (due to an exception, or due to one of
those early "return"s).
So, to make sure the sql session is cleaned up for real, wrap the
whole thing in a try: ... finally:.
Also wrote a short tool to test whether the session is actually
empty. The tool is currently disabled, but ready to be used.
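A sketch of the shape of that fix; the method names are assumptions:

    class MediaGoblinApp(object):
        def __call__(self, environ, start_response):
            # Delegate the real work, but guarantee session cleanup
            # even when the handler raises or hits an early return.
            try:
                return self._real_call(environ, start_response)
            finally:
                self.db.reset_after_request()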
After converting everything, check what is actually used in
the db. For media_types that are not used, drop all the
media_data tables and remove the migration info.
This switches the whole source code over to use sql instead
of mongodb. It's a pretty easy change, but it changes nearly
the complete way things work. Hopefully everything works!
The JSON fields are really "dumb stuff in here" fields.
They are not intended to be indexed or anything, and they can get
large. For example, the exif_all field in one of my simple tests is
nearly 7 kB. Although VARCHAR might work, TEXT just feels better as
the storage type.
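A sketch of such a column type, using SQLAlchemy's TypeDecorator to
serialize JSON into a TEXT column:

    import json

    from sqlalchemy import Text
    from sqlalchemy.types import TypeDecorator

    class JSONEncoded(TypeDecorator):
        """Opaque "dumb stuff in here" column: JSON serialized as TEXT.

        Not meant to be indexed or queried; values (e.g. exif_all at
        nearly 7 kB) can exceed comfortable VARCHAR sizes.
        """
        impl = Text

        def process_bind_param(self, value, dialect):
            return json.dumps(value) if value is not None else None

        def process_result_value(self, value, dialect):
            return json.loads(value) if value is not None else None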
1. No need to drop media_data['exif']; we only have and
want media_data['exif_all'].
2. Use media['_id'] instead of media._id (better not to use
dot notation on mongo objects in such a low-level tool).
MediaEntry.media_data.exif_all will contain all the
"clean" EXIF data.
MediaEntry.exif_display_iter() is an iterator that fetches
the most interesting entries for display from that data.
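A sketch of how that iterator can be written, assuming exif_all
behaves like a dict; the tag list is purely illustrative:

    # Tags considered "most interesting" for display (illustrative).
    USEFUL_EXIF_TAGS = [
        'Image Model',
        'EXIF DateTimeOriginal',
        'EXIF FNumber',
        'EXIF ExposureTime',
        'EXIF ISOSpeedRatings',
    ]

    # On the MediaEntry model:
    def exif_display_iter(self):
        """Yield (tag, value) pairs worth showing, in a stable order."""
        exif_all = self.media_data.exif_all or {}
        for tag in USEFUL_EXIF_TAGS:
            if tag in exif_all:
                yield tag, exif_all[tag]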