Docs:
http://docs.sqlalchemy.org/en/latest/core/engines.html#configuring-logging
So for an application utilizing python logging for real
(and MediaGoblin should) the rule is:
- Don't use echo=True,
- but reconfigure the appropriate loggers' level.
So replaced echo=True with a line that reconfigures the
appropriate logger to achieve the same effect.
This still dumps loads of SQL queries into the main
log, but at least they're not duplicated any more.
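For reference, achieving the echo=True effect through logging looks
roughly like this (where exactly this lands in MediaGoblin's setup code
is left open here):

    import logging

    # Instead of create_engine(..., echo=True): raise the level of
    # sqlalchemy's engine logger, so the queries go through the normal
    # logging setup and are only emitted once.
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)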
The name part of a MediaFile only uses a very limited
set of values, currently things like "original" or
"thumb".
So instead of storing the string on each entry, just store
a short integer referencing the FileKeynames table and keep
the appropriate string there.
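A rough sqlalchemy sketch of the idea (table and column names here are
illustrative, not necessarily the final schema; the media entries table
itself is not shown):

    from sqlalchemy import Column, Integer, Unicode, ForeignKey
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class FileKeynames(Base):
        """Tiny lookup table holding "original", "thumb", ... exactly once."""
        __tablename__ = "core__file_keynames"
        id = Column(Integer, primary_key=True)
        name = Column(Unicode, unique=True)

    class MediaFile(Base):
        __tablename__ = "core__mediafiles"
        media_entry = Column(Integer, ForeignKey("core__media_entries.id"),
                             nullable=False, primary_key=True)
        # short integer reference instead of repeating the string per row
        name_id = Column(Integer, ForeignKey(FileKeynames.id),
                         nullable=False, primary_key=True)
        file_path = Column(Unicode)
        name_helper = relationship(FileKeynames)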
Using the new check_media_slug_used(), it is possible to
have one generic generate_slug in the mixin class instead
of one in each db class.
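The shared version could look something along these lines (simplified
sketch; the import paths, the slugify helper and the exact
check_media_slug_used signature are assumptions, not the exact code):

    from mediagoblin.tools.url import slugify              # assumed helper
    from mediagoblin.db.util import check_media_slug_used  # see below

    class GenerateSlugMixin(object):
        def generate_slug(self):
            # Same logic for mongo and sql; only check_media_slug_used
            # knows about the backend.
            slug = slugify(self.title)
            if check_media_slug_used(self.uploader, slug, self.id):
                if self.id:
                    slug = u"%s-%s" % (self.id, slug)
                else:
                    # without an id there is no way to de-dupe,
                    # so rather have no slug at all
                    slug = None
            self.slug = slug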
In the sql variant self.id is not always set: if the slug
alone would create a duplicate, the current code settles
for "no slug at all".
In two cases (generating a new slug and editing the slug)
it is nice to know in advance (before the db gets angry)
whether the slug is used or free. So created a db utility
function to check for this on mongo and sql:
check_media_slug_used()
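On the sql side the check boils down to one query. A sketch (import
paths and the exact signature are assumptions; the real function may
also take the db connection, and the mongo variant does the equivalent
find_one):

    from mediagoblin.db.sql.models import MediaEntry   # assumed import path
    from mediagoblin.db.sql.base import Session        # assumed import path

    def check_media_slug_used(uploader_id, slug, ignore_m_id=None):
        """True if another MediaEntry of this uploader already uses slug."""
        filt = (MediaEntry.uploader == uploader_id) \
               & (MediaEntry.slug == slug)
        if ignore_m_id is not None:
            # when editing, the entry's own current slug is not "used"
            filt = filt & (MediaEntry.id != ignore_m_id)
        return Session.query(MediaEntry.id).filter(filt).first() is not None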
The current SQL layout/sqlalchemy structure can't detect
when a slug isn't needed any more and delete it. So
provide a tool function to clean up unused slugs.
It's currently not hooked to any gmg function!
In some cases (notably the mark_entry_failed function) it
is useful to have atomic update functionality on the db. On
mongo this requires special syntax.
So created an atomic_update function for mongo and started
to use it in mark_entry_failed.
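The mongo version is basically a thin wrapper around update() with $set
(sketch, assuming the mongokit-style table.collection handle):

    def atomic_update(table, query_dict, update_values):
        """Let mongo apply the changes in one step instead of
        read-modify-save on our side."""
        table.collection.update(
            query_dict,
            {"$set": update_values})

    # mark_entry_failed() then uses it roughly like:
    # atomic_update(mgg.database.MediaEntry, {'_id': entry_id},
    #               {u'state': u'failed'})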
* media_data_start:
And media_data_init() for sql as a dummy
Create a fake MediaEntry.media_data for sql
Video media_data: Change layout in the mongo world
So that the SQL backend is more usable, let the MediaEntry
have a faked media_data.
It's extremely fake: the returned dict is always a new one,
so any stored info is simply lost!
Change the media_data for video from
entry.media_data["video"] to entry.media_data directly.
Also start a bare MediaEntry.media_data_init(**kwargs)
method for setting up the media_data and possibly
initialising it with kwargs.
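The "extremely fake" version is roughly this on the sql MediaEntry
(sketch only; the key name in the usage comment is just an example):

    class MediaEntry(object):   # sketch; the real class is the sql model
        @property
        def media_data(self):
            # extremely fake: a brand new dict on every access,
            # so anything stored in it is lost again
            return {}

        def media_data_init(self, **field_values):
            # bare dummy for now: with the fake media_data above this
            # effectively throws the values away
            self.media_data.update(field_values)

    # video code now does e.g. entry.media_data.get("duration")
    # instead of entry.media_data["video"]["duration"]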
It's good practice to clean up the SQL session after each
request so that the next request gets a fresh one.
It's an application decision whether one wants a
just-in-case ROLLBACK or COMMIT; there is an idea behind
each, really. I have decided on ROLLBACK. The idea is: "if
you forget to commit your changes yourself, there's
something broken. Maybe you got an exception?"
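In code this is just a small per-request cleanup hook, roughly (import
path assumed; where it gets called from is not shown here):

    from mediagoblin.db.sql.base import Session   # assumed import path

    def cleanup_sql_session():
        # just-in-case ROLLBACK: if something is still uncommitted at
        # the end of the request, treat it as a bug rather than keep it
        Session.rollback()
        Session.remove()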
Attachments working with the sql backend:
- SQL schema for attachment files, ordering attachments by
their name, not by the submission order (as earlier).
- Dot-notation for attachments, where missing.
- Convert existing attachments over from mongo -> sql.
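Sketch of the schema side (names illustrative, the real path column
type is fancier):

    from sqlalchemy import Column, Integer, Unicode, ForeignKey
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class MediaAttachmentFile(Base):
        __tablename__ = "core__attachment_files"
        id = Column(Integer, primary_key=True)
        media_entry = Column(Integer,
                             ForeignKey("core__media_entries.id"),
                             nullable=False)
        name = Column(Unicode, nullable=False)
        filepath = Column(Unicode)

    # and on MediaEntry, ordered by name instead of submission order:
    # attachment_files = relationship(MediaAttachmentFile,
    #                                 order_by=MediaAttachmentFile.name)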
I don't know exactly why, but an exception during
processing didn't find its way up. The entry was marked as
failed and that was it. So I decided to add a _log.warn to
the part of mark_entry_failed that handles general
exceptions.
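Roughly (only the general-exception branch is sketched here; the real
function also records fail_error/fail_metadata on the entry):

    import logging

    _log = logging.getLogger(__name__)

    def mark_entry_failed(entry_id, exc):
        # general exception branch: at least leave a trace in the log now
        _log.warn("No idea what happened here, but it failed: %r", exc)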
* cwebber/celerysql:
Adjust unit tests to match new celery/kombu sqlalchemy setup
"database" is not the sqlalchemy kombu transport... should be "sqlalchemy"
Celery and kombu databases should also be .gitignore'd
kombu-sqlalchemy a requirement in order for kombu sqlalchemy transport to work
Move mediagoblin dbs out of user_dev for race condition directory-creation reasons.
Give kombu its own db. Responding to Elrond "sqlite will lock all the time!" :)
Apparently an absolute path is three slashes after sqlite:. Thx elrond.
Should be all that's needed to switch celery/kombu settings to sqlalchemy
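Put together, those celery/kombu commits amount to settings roughly
like the following (setting names are the stock celery 2.x ones; the
paths/values are only illustrative, not the exact MediaGoblin config):

    # kombu sqlalchemy transport ("sqlalchemy", not "database"), with
    # its own sqlite db so it doesn't fight the main db over locks
    BROKER_TRANSPORT = "sqlalchemy"
    BROKER_HOST = "sqlite:///celery/kombu.db"

    # celery results go into their own database as well
    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "sqlite:///celery/celery.db"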
Some parts of the code like to use .setdefault(). So make
them happy and provide a minimal version. It ignores the
given default and expects the attribute to already exist.
Other parts use .delete() to delete a complete object. This
version expects the object to live in a session and also
does the final commit.
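A minimal sketch of the two helpers on the sql model base class (the
name of that base class is assumed here):

    from sqlalchemy.orm import object_session

    class GMGTableBase(object):
        def setdefault(self, key, defaultvalue):
            # minimal version: ignore the given default, the attribute
            # has to exist already (it's a column after all)
            return getattr(self, key)

        def delete(self):
            # the object is expected to live in a session already
            sess = object_session(self)
            assert sess is not None, "Not going to delete detached %r" % self
            sess.delete(self)
            sess.commit()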
There was no place in the software telling the user the
version in use. So start by having the main server emit a
startup notice including the version string. Uses python
logging, so should be easy to reconfigure, etc.
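Sketch of the notice (assuming the version string lives in a
mediagoblin._version style module):

    import logging
    from mediagoblin import _version   # assumed location of the version

    _log = logging.getLogger(__name__)

    _log.info("GNU MediaGoblin %s main server starting",
              _version.__version__)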