Searching media by slug is easy in mongo, but doing the
joins in sqlalchemy is not as nice. So created a function
for it, and created the same function for mongo, so that
both backends work the same way.
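For illustration only, a sketch of what the sql side of such
a helper could look like; the function name, the join against
User and the module path are assumptions, not necessarily the
actual MediaGoblin code:

    from mediagoblin.db.sql.models import MediaEntry, User  # assumed

    def get_media_by_slug(session, username, slug):
        # the join is needed because the url only carries the
        # username, while MediaEntry stores the numeric uploader id
        return (session.query(MediaEntry)
                .join(User, MediaEntry.uploader == User.id)
                .filter(User.username == username,
                        MediaEntry.slug == slug)
                .first())

A mongo twin with the same signature can then do the simple
find_one() lookups internally, so callers don't need to care
which backend they are on.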
_get_tag_name_from_entries:
1) Replace:
       if q.count():
           elem = q[0]
   by:
       for element in q:
           ...
           break
   so that only one db query is made instead of two (see the
   sketch after this list).
2) And another dose of Dot-Notation, as usual.
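The difference, sketched (the .name attribute is a
placeholder, not necessarily the real column):

    # Before: .count() sends one query, q[0] sends a second one.
    if q.count():
        elem = q[0]
        name = elem.name          # placeholder attribute

    # After: iterating the query fetches the rows directly, so only
    # one query hits the database; break after the first row.
    for element in q:
        name = element.name       # placeholder attribute
        break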
When uploading a new image the processing code wants to set
the media_data['exif'] part. As exif is not yet in sql,
there is no way to make this work now. So the workaround is
to check for "no row exists yet" and just ignore exif.
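A sketch of what the workaround might look like in the
processing code (the exif_data variable and the exact
condition are assumptions):

    # exif has no sql columns yet, so only store it when a
    # media_data row/dict already exists; otherwise drop it for now
    if entry.media_data is not None:
        entry.media_data['exif'] = exif_data
    # else: no media_data yet -- ignore exif until it is in sql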
If there is no media_data row for the current media (for
whatever reason, there might be good ones), let
MediaEntry.media_data not raise an exception but just
return None.
The exif display part now handles this by checking whether
.media_data.exif is defined (None has no attribute exif, so
it's undefined, all fine).
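On the sql side this boils down to roughly the following
property; ImageData and the column names are assumptions:

    from sqlalchemy.orm import object_session

    class MediaEntry(Base):
        # ... other columns ...

        @property
        def media_data(self):
            # .first() returns the row, or None when no row exists
            # yet, instead of raising like .one() would
            return (object_session(self).query(ImageData)
                    .filter_by(media_entry=self.id).first())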
The processing should also create .gps_* instead of the old
['gps']['x']. To ease forward porting, use the new
media.media_data_init() to set the gps data in the media.
Same idea as in the previous commit.
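For illustration, the processing call could then look roughly
like this; the keyword names are assumptions about what
media_data_init() accepts:

    # one call instead of poking ['gps']['latitude'] etc. directly;
    # on sql this fills the .gps_* columns, on mongo the old dict
    entry.media_data_init(
        gps_latitude=latitude,
        gps_longitude=longitude)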
Joar caught this one.
To reproduce:
1. Create a user with an all-decimal ObjectId in mongo
2. Login using that user, while mongodb is enabled.
3. Switch instance to sql.
4. Restart.
5. Refresh any page.
This will error, because no user with that object id exists
any more.
While at it, improved the logging.
When searching for a user by username, there can either be
no result or one result. There is a unique constraint on
the db.
.one in mongokit raises an error for more than one result.
But that can't happen anyway. So no problem.
.one in sqlalchemy raises an error for more than one, but
that's not a problem anyway. It also raises an error for no
result. But no result is handled by the code anyway, so no
need to raise an exception.
.find_one doesn't raise an exception for more than one
result (no problem anyway) and just returns None for no
result. The latter is handled by the code.
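In code, the point is roughly this; Session stands for
whatever session object is in use, and whether the sql side
goes through a find_one() compatibility wrapper or calls
.first() directly is left open:

    # mongo: the matching document, or None if no user exists
    user = db.User.find_one({'username': username})

    # sqlalchemy: .first() likewise returns one row or None, so the
    # exceptions .one() would raise are simply not needed
    user = Session.query(User).filter_by(username=username).first()

    if user is None:
        ...  # "no such user" is already handled by the calling code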
Docs:
http://docs.sqlalchemy.org/en/latest/core/engines.html#configuring-logging
So for an application using python logging for real (and
MediaGoblin should), the rule is:
- Don't use echo=True,
- but reconfigure the appropriate loggers' level.
So replaced the echo=True by a line that reconfigures the
appropriate logger to achieve the same effect.
This still dumps loads of SQL queries into the main log, but
at least they're not duplicated any more.
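Concretely, the change amounts to something like this;
'sqlalchemy.engine' is the logger name from the docs linked
above, and INFO is roughly the level of detail echo=True gave:

    import logging
    from sqlalchemy import create_engine

    db_url = 'sqlite:///mediagoblin.db'   # example url
    engine = create_engine(db_url)        # no echo=True any more
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)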
The name part of a MediaFile only uses a very limited number
of values, currently things like "original" or "thumb".
So instead of storing the string on each entry, just store a
short integer referencing the FileKeynames table and keep
the appropriate string there.
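As a sketch of the idea (table and column names here are
assumptions, not the actual MediaGoblin schema):

    from sqlalchemy import (Column, ForeignKey, Integer,
                            SmallInteger, Unicode)
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class FileKeynames(Base):
        # the handful of distinct names ("original", "thumb", ...)
        __tablename__ = 'file_keynames'
        id = Column(Integer, primary_key=True)
        name = Column(Unicode, unique=True)

    class MediaFile(Base):
        __tablename__ = 'mediafiles'
        media_entry = Column(Integer, primary_key=True)
        # short integer instead of repeating the string per row
        name_id = Column(SmallInteger, ForeignKey(FileKeynames.id),
                         primary_key=True)
        file_path = Column(Unicode)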
Using the new check_media_slug_used it is possible to have
one generic generate_slug in the mixin class instead of in
each db class.
In the sql variant self.id is not always set: if the slug
alone would create a dupe, the current code falls back to
"no slug at all".
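A sketch of the generic version; slugify() here is a crude
stand-in, and the check_media_slug_used() signature is the
same assumption as in the sketch after the next paragraph:

    import re

    def slugify(text):
        # crude stand-in for the real slug generation
        return re.sub(r'[^a-z0-9]+', '-', text.lower()).strip('-')

    class GenerateSlugMixin(object):
        def generate_slug(self):
            slug = slugify(self.title)
            if not check_media_slug_used(self.uploader, slug, self.id):
                self.slug = slug
            elif self.id:
                # a dupe, but the id makes it unique again
                self.slug = u"%s-%s" % (self.id, slug)
            else:
                # sql: self.id not set yet, so no slug at all
                self.slug = None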
In two cases (generating a new slug and editing the slug) it
is nice to know in advance (before the db gets angry)
whether the slug is used or free. So created a db utility
function to check this on mongo and sql:
check_media_slug_used()
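Sketches of the two variants; the uploader/slug fields and
the ignore_m_id parameter (skip the entry currently being
edited) are assumptions:

    # sql variant
    def check_media_slug_used(uploader_id, slug, ignore_m_id=None):
        query = Session.query(MediaEntry).filter_by(
            uploader=uploader_id, slug=slug)
        if ignore_m_id is not None:
            query = query.filter(MediaEntry.id != ignore_m_id)
        return query.count() > 0

    # mongo variant
    def check_media_slug_used(db, uploader_id, slug, ignore_m_id=None):
        spec = {'uploader': uploader_id, 'slug': slug}
        if ignore_m_id is not None:
            spec['_id'] = {'$ne': ignore_m_id}
        return db.MediaEntry.find(spec).count() > 0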
The current SQL layout/sqlalchemy structure can't detect
whether a slug isn't needed any more and delete it. So
provide a tool function to clean up unused slugs.
It's currently not hooked to any gmg function!